Olivier Coudert on September 20th, 2014

Why comment code?

Writing code is not simply about putting language constructs together. It’s about architecture, complexity analysis, tradeoffs, testing, measuring performance, etc. And it’s about making sure that developers (other people as well as the original author) can later read and understand that code, for bug fixing or enhancement. This is where comments come into play (not to be confused with documentation!).

Developers put comments into their code to explain what it does, how it does it, to spell out assumptions, to warn about exceptions, etc. Commenting is taken seriously in the world of code quality: people have measured the comment-to-code ratio and come up with empirical rules. It is widely recommended that about 30% of the source code be comments.

So what is a good ratio? Is 30% comments sufficient? Should it be more, or is 15% OK? And why?

I label comments as follows:

  1. Redundant: the comment states the obvious and does not bring any information. It is a waste of space.
  2. Obsolete: the code changed, but the comment was not updated. It is irrelevant at best, misleading and confusing at worst.
  3. Incorrect: the comment only reflects the author’s confusion. It is as misleading as (2).
  4. Informative: none of the above.

In my experience most comments fall into category 1: pollution that does not bring any information the actual source code does not already provide. Category 2 is quite common too: as the code evolves, the comments do not always keep up. How often do I see genuinely useful comments? Rarely, because:

  • If a comment is necessary to explain what a function does or what a variable represents, I argue that the name of the entity (function, class, type, data member, variable, etc) should carry that meaning.
  • If the comment is about an assumption, an invariant, or an exception, then some extra code guarded with an assert() is an unambiguous statement, better than any explanation.

Code should be as self-explanatory as possible. The API, the names of the constructs, the architecture, the assertions, the exception handling: all those aspects carry far more information and intent than a few sentences in English (replace with your favorite dialect used to write comments). Natural languages are inherently ambiguous; they cannot be trusted to convey formal or logical statements.

I find a comment to be valuable only in the following cases:

  • To refer to a specific, non-trivial algorithm.
  • To leave a note for future enhancement.
  • To make a joke.

So what is the adequate percentage of comments? If your code reads like perfect prose, then you should have 0% comments. In practice, a few percent (5% is more than enough) suffices for notes, references, or clarifications on hardcore algorithms. Anything more is pollution, or the symptom of code that could be written better.

“Who truly writes good code hardly needs any comment”



Olivier Coudert on January 31st, 2014

I started 2013 very upbeat about EDA in the cloud. I pictured EDA very slowly moving to a SaaS business, using public clouds as a scalable infrastructure to adjust to the irregular computing resource requirements throughout the lifetime of a complex system design. I knew it would take years, but I felt that the moment was right.

I bet a year ago that a major semiconductor company would use a public cloud to handle peak compute demand (e.g., for logic simulation, extraction, physical verification) and meet their schedule.

Well, so far I have only seen a few attempts, and a lot of aborted discussions.

We first saw Cadence making some of its design solutions available in its private cloud, with only a handful of customers to date.

We all saw Synopsys making its VCS solution available on AWS back in 2011. With exactly zero customers to show for it.

We saw Nimbic putting its SW in the cloud. With unclear outcomes.

What else did I see over the past 2 years (names purposely omitted)?

I saw a top-3 EDA vendor successfully conduct a real-scale experiment to reduce design validation from days to less than one hour, using thousands of cores dynamically allocated in a public cloud. But in the end, the viability of the solution depended on all the partners temporarily putting their IP in the cloud, and an agreement was never reached.

I saw a top-15 semiconductor company test-driving a cloud-based solution to transparently augment their compute power. This on-demand service allowed them to bring thousands of cores fully operational in less than 15 minutes, ready for simulation and verification. But in the end, the old-fashioned way prevailed: buy more machines, even if that means many will sit idle; or simply miss the deadline.

I saw another semiconductor company open to using the cloud for the same purpose (peak compute power). But they eventually gave up, although this time for a good reason: economics. They are better off with their own high-performance data center, where they can squeeze every bit of performance out of their very pricey EDA licenses. And we are talking serious high-performance computing: 2000 cores overclocked to 4GHz with liquid cooling, and a GPFS file system at 5GB/s. A pretty good setup, which even Amazon’s HPC offering cannot match.

I saw several EDA startups embracing the concept, but unable to drive more sales with it.

And I saw many potential customers turning down the prospect of EDA in the cloud, for one reason and one reason only: security.

Of course, this is not always the real reason. Quite often the company’s IT department will work hard to kill the option of using a public cloud –because that would mean less power for them. But security is the surest topic to scare away the executives.

It does not matter if you demonstrate VPN and encrypted communications with dynamically changing keys. It does not matter if you show that data-at-rest is always encrypted, and is never in the clear except in RAM. It does not matter if you have a monitoring system that reports any activity outside an acceptable scenario. It does not matter if the person arguing against cloud-based solutions because of security has no second thoughts about sharing her credit card information when shopping online. It does not matter if every study shows that the most likely source of a security breach in a company is not outside hackers, but insiders: its own employees. Security is the bogeyman of the semiconductor world.

It is striking to see so many industries moving to SaaS and the public cloud. One simple example: Big Data. Yes, it is a buzzword, but for many it is very real, and could not be achieved without software operating in public or hybrid clouds (don’t take my word for it, look it up). With the complexity of HW/SW only increasing, I am at a loss to justify the reluctance of the electronic design industry to embrace cost-efficient, flexible solutions.

So unless there is a pressing reason (read: economics trumping politics), it is unlikely that EDA will evolve to SaaS in the cloud as quickly as I hoped. It looks like the whole industry needs some serious rejuvenation at the top.


Olivier Coudert on November 24th, 2013

One of the most important factors that define the online experience is speed: how long does it take for a page to be displayed by the browser?

Every study shows that if a page takes too long, people give up and leave. Back in 2006, a person shopping online expected a page to load in 4 seconds or less. That decreased to 2 seconds in 2009. Nowadays it is below one second. According to Google, latency greater than 300ms is perceptible to users.

I started this blog 4 years ago, and since the beginning I have been monitoring its traffic with WordPress plugins. It reached a steady audience 3-4 months after its launch, and over time I saw its audience change from EDA-only to software-heavy, cruising after 15 months. The main factor for visits is obviously whether the posts bring any value to readers (a subjective criterion), as well as how easy they are to find in search engines (an objective criterion). Only recently did I wonder whether page speed was a factor.

First things first: I started using Google Analytics to accumulate measurements. It gave me great insight into visitors –where they are located, how they got to the site, how long they spend on it, etc. However, its page load data were disappointing, because they are fairly inaccurate.

So I started using a few online tools to test and analyze speed. I found pingdom and webpagetest great for the detailed waterfalls they provide, breaking the page load time down into its components –DNS lookup, connection to the server, data transfer, etc.

One great feature of webpagetest is that it reports the page load time for a first view and a repeat view. “First view” assumes that no browser caching is enabled, which is the worst case for a first-time visitor. “Repeat view” leverages browser caching for returning visitors. Obviously the repeat view can be significantly faster than the first view.

webpagetest SC front page

There are also various sites that analyze the page content and grade it according to speed-oriented rules. For instance: are the images properly compressed, how many redirections are performed, is the page minified, is there any blocking JavaScript or CSS, etc. The best-known are Google’s PageSpeed Insights and Yahoo’s YSlow. Although they do not measure the actual page load time, they give valuable recommendations to improve speed. More importantly, those grades are taken into consideration by search engines when ranking results. Thus, regardless of whether these recommendations improve the page load time significantly, following them might help the page’s ranking.

The waterfall analysis was striking. Although there was a fairly large variance, due in part to the fact that my blog is on shared hosting, I was surprised to see that some of my posts took up to 20 seconds to fully load! In some cases over 4 seconds were spent doing multiple DNS lookups. Not surprisingly, my PageSpeed Insights and YSlow grades were pretty poor.

First things first: I had to adopt good practices for speed. This means:

  • Optimize images with proper compression
  • Gzip the data sent by the server
  • Minify the page source (CSS, JavaScript, HTML)
  • Limit the use of blocking JavaScript and CSS
  • Avoid URL redirects and DNS lookups
  • Leverage browser caching

Image optimization

To optimize images, I used Yahoo’s Smush.it, which does an excellent job at compressing images without losing visual quality. It is also readily available via the WordPress plugin WP Smush.it, which makes configuring and using it a no-brainer.

DNS lookup optimization

Considering that up to 4 seconds were spent on DNS lookups for my own site, I looked at two options:

  • Avoid URL redirections
  • Use a better DNS service

When I started my blog, I added two redirection rules in my .htaccess file, so that the address ocoudert.com/blog was mapped to http://ocoudert.com/blog/, itself redirected to http://www.ocoudert.com/blog/. That way all the entry points to the site end up at the same unique address. This was to make sure that search engines would see these three URLs as referring to the same content. Unfortunately it also meant extra DNS lookups, which turned out to be pretty costly in terms of page load time.

Eventually I focused on the second option. I moved from my web host’s default DNS servers to CloudFlare’s. CloudFlare is a CDN and DNS service. You can sign up for free and re-assign your domain name to their DNS servers. CloudFlare is not the only company providing free DNS servers, but it was certainly extremely simple to set up, and the impact was impressive: the overall DNS lookup time for ocoudert.com dropped below 300ms.

Page optimization

For page optimization and caching, I tried the two best speed-optimizing WordPress plugins: WP Super Cache and W3 Total Cache (referred to as WPSC and W3TC respectively in what follows). Both feature page rewriting (minifying, compression) and server-side page caching (delivering a cached static page instead of dynamically generated content). They also feature CDN (Content Delivery Network) setup.

WPSC is incredibly simple to set up. Make sure you:

  • Use the most aggressive caching method (i.e., mod_rewrite)
  • Enable compression
  • Enable preload (instead of waiting for a first visit to trigger caching, a static page is built in advance).

W3TC is way more complicated. There are dozens of options, some requiring a deeper understanding of page rendering. It however offers more features than WPSC, most notably: browser caching (a complete set of expires directives), moving blocking JavaScript to the end of the page (which helps reduce the perceived latency), and caching of database and object requests.

WP Super Cache or W3 Total Cache?

If you want all the bells and whistles, go with W3TC. Especially if you have a dedicated server and enough traffic, W3TC will do more for you. Make sure you understand the impact of its many options –e.g., caching database and object requests on a shared-hosting site will likely slow down the server. W3TC requires a more complex and lengthy setup, and you will need more time to experiment and find the best configuration.

If you have a simple shared-hosting site like me, WPSC will do the job just as well for a setup that takes no more than a minute.

Keep in mind that W3TC does get your Google/Yahoo page grade higher than what you can get with WPSC, which might help your search rank. As shown below, my front page got a 66/76 PageSpeed grade with WPSC, and a considerably higher 92/93 with W3TC. However, I experimented with each of WPSC and W3TC for one month, and I didn’t see any significant ranking difference.

pagespeed SC front page

pagespeed TC3 front page 1


I went from un-optimized, dynamic content, to:

  • Mostly static, compressed pages
  • Page rewriting (minified, non-blocking JavaScript)
  • Better DNS servers for the domain ocoudert.com
  • Browser caching properly enabled

First-visit maximum page load went from 20 seconds to under 4 seconds. I didn’t see any visible page rank change. I saw a small uptick in traffic, but it may be unrelated to the site speedup –I’ll see in a few weeks whether it persists. At least I know that visitors to the site now have a better experience.


Olivier Coudert on October 27th, 2013

AWS has made virtual machines (EC2) ubiquitous. You can launch and stop them at will, log into them, create new accounts, etc. Then you start digging into remote control for multiple users. How do I set up a ssh connection between my local client and a remote machine? Which key should I use? How do I set up a passwordless ssh? These are the practical questions answered in this post.

ssh, which stands for Secure Shell, is a network protocol that establishes an encrypted communication between two hosts. It is used to securely launch a command on a remote host, to log into that host, or to securely move files between the two hosts (via the command scp). It uses public and private keys to establish a handshake between the two hosts, so that they can agree on a fast cipher for the rest of the communication.

Assume you are user johnsmith on localhost, and you want to set up a passwordless ssh as johnny on remotehost. The first thing you need to do is generate your public and private keys. This is done with ssh-keygen:

johnsmith@localhost:~$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/johnsmith/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in id_rsa.
Your public key has been saved in id_rsa.pub.
The key fingerprint is:
a1:81:5a:05:d0:be:fe:cd:27:37:aa:58:57:24:c1:70 johnsmith@localhost
The key's randomart image is:
+--[ RSA 2048]----+
|  .o..ooE        |
|    .o ...       |
|   .o . o .      |
|   o.  o +       |
|  .  .. S .      |
|    .    .       |
|   .  . .        |
|    .o +. +      |
|    ..o.+= .     |
+-----------------+

You will be prompted for a key name and a passphrase. Leave the passphrase empty. By default the public and private keys are stored in directory ~/.ssh as id_rsa.pub and id_rsa respectively. You can choose to name the keys as you please. This is useful when you want to access different remote systems with different keys –more on this later.

The type of the key, indicated with the -t option, is either rsa or dsa. It selects the cryptographic algorithm used for the handshake. Whether RSA or DSA is better is still debated. I would recommend RSA though, because:

  1. ssh-keygen forces a DSA key to be exactly 1024 bits. On the other hand, the RSA key bit length can be set with the -b option, up to 2^15 = 32768 bits. An RSA key is 2048 bits by default, which is considered safe.
  2. DSA is pushed by government agencies…

Whenever you ssh to the remote host as johnny, you need to enter johnny’s password. That is not practical when writing scripts that invoke commands on remote machines. Also, a password can be guessed or stolen –using brute-force approaches, or social engineering techniques (phishing, impersonation, etc).

Much more practical and safer is a passwordless ssh. We are going to allow public-key authentication on the remote host. This means that a guest with a private key that matches a public key authorized by the remote host will be granted access without a password.

First, we need to add your public key to the remote host’s set of authorized keys. Simply log into the remote host, edit its ~/.ssh/authorized_keys file, and cut-and-paste your public key (in our example, id_rsa.pub) into the file. We can also use scp to transfer the file. scp stands for Secure Copy, and is built on top of ssh:

# Copy the public key on the remote host as 'foo'
johnsmith@localhost:~$ scp ~/.ssh/id_rsa.pub johnny@remotehost:foo
johnny@remotehost's password: 
id_rsa.pub                                  100%  425     0.4KB/s   00:00
# Log into remote host
johnsmith@localhost:~$ ssh johnny@remotehost
johnny@remotehost's password: 
# Add the public key to the authorized keys
johnny@remotehost> cat foo >> ~/.ssh/authorized_keys ; rm foo

Make sure that the file permissions are proper: the .ssh files must be readable and writable by the user only.

johnny@remotehost> chmod 700 ~/.ssh
johnny@remotehost> chmod 600 ~/.ssh/authorized_keys
johnny@remotehost> ls -al ~/.ssh
total 12
drwx------  2 johnny johnny 4096 Oct 25 01:39 ./
drwx--x--- 19 johnny nobody 4096 Oct 26 04:31 ../
-rw-------  1 johnny johnny 425 Oct 25 01:39 authorized_keys

Finally, make sure that the remote host’s ssh server is configured for public-key authentication. Look for the following lines in the /etc/ssh/sshd_config file, which configures the ssh server:

PubkeyAuthentication yes
RSAAuthentication yes
PasswordAuthentication no

The first two lines authorize authentication with an RSA public key. The third line specifies whether a username + password authentication is allowed. If set to no, only passwordless ssh is allowed on the remote host: password cracking on the host becomes irrelevant. You can access the remote host only if you have the correct private key.

Note that if you modify the /etc/ssh/sshd_config configuration file, you will have to restart the ssh server:

johnny@remotehost> sudo /etc/init.d/ssh restart

That’s it. The remote host will grant (passwordless) access only to guests with the appropriate private key –no more passwords. Your local client is the only system that stores your private key, so only you can access the remote host.

If you want to manage several remote systems independently, you can create a key pair for each system, for instance naming the keys host1_id.pub and host1_id for host host1. You can then ssh to a specific host using the -i option to authenticate yourself with the appropriate private key, e.g.:

johnsmith@localhost:~$ ssh -i ~/.ssh/host2_id johnny@host2 'echo "I am `who am i` on `hostname`"'
I am johnny on host2

Have fun with your array of remote hosts, and be safe!


Olivier Coudert on September 26th, 2013

Type casting consists of converting an expression of one type into another type. It can be done explicitly, by telling the compiler which type the expression must be converted to, for instance:

float x = 3.14;
int i = int(x);  // i is assigned 3.
A* a = foo();
B* b = static_cast<B*>(a);
C* c = reinterpret_cast<C*>(b);

It can also be done implicitly, by letting the compiler decide which type conversion is appropriate to successfully compile the source code. The compiler follows type conversion rules, e.g., walking up the class hierarchy to find the implementation of a non-virtual method. This makes the code simpler to write and easier to read. However, adding one’s own rules to the lot can lead to subtle bugs rooted in implicit type casting. I will illustrate with a simple example, inspired by a recent real case.

Let us say that you have a class A, and hundreds of thousands of lines of legacy code using pointers to A. The code looks something like this. Note how the code tests for NULL pointers: pointers are implicitly converted to Boolean, which is perfectly correct.

class A;
typedef A* ptra;

class A {
public:
  int foo(ptra a, ptra b);
  int bar();
  int operator[](int i);
};

int A::foo(ptra a, ptra b) {
  if (a && b) {
    return (*a)[b->bar()];
  } else if (a && !b) {
    return a->bar();
  } else if (b) {
    return b->bar();
  } else {
    return 0;
  }
}
Later, the behavior of the type ptra was extended (for instance, to add a reference count). A class wrapping the pointer was added to avoid disrupting the existing code and public APIs. It looked like this:

#include <cstddef>

template <class T>
class Ptr {
public:
    Ptr() : p_(NULL) {}
    Ptr(T* p) : p_(p) {}

    T* get() const { return p_; }
    T* operator -> () const { return p_; }
    T& operator * () const { return *p_; }

    operator bool () const { return (p_ != NULL); }

private:
    T* p_;
};

typedef Ptr<A> ptra;

Note the conversion operator from Ptr to bool. Thanks to it, the existing code that implicitly checks for NULL pointers does not need to be touched: the statement ‘if (a) {...}‘ behaves as before. However, the implicit type conversion has unexpected effects. Consider the following code.

#include <cassert>
#include <iostream>
using std::cout;

typedef Ptr<int> PtrInt;
typedef Ptr<char> PtrChar;

int main() {
    int i1;
    int i2;
    PtrInt a1(&i1);
    PtrInt a2(&i2);

    PtrChar b2((char*)&i2);

    assert(a1.get() != a2.get());
    if (a1 != a2) {
        cout << "OK: a1 != a2\n";
    } else {
        cout << "WRONG: a1 != a2 failed.\n";
    }

    if (b2 == a2) {
        cout << "OK: b2 == a2\n";
    } else {
        cout << "WRONG: b2 == a2 failed.\n";
    }

    if (b2 != a1) {
        cout << "OK: b2 != a1\n";
    } else {
        cout << "WRONG: b2 != a1 failed.\n";
    }

    return 0;
}
It produces the following output:

ocoudert:~/src$ g++ sample.cc && ./a.out
WRONG: a1 != a2 failed.
OK: b2 == a2
WRONG: b2 != a1 failed.

Quick, which output lines are correct? The answer is: none of them.

The comparisons b2 == a2 and b2 != a1 are problematic: they compare two objects templated with two different classes. They should not even compile! The comparison a1 != a2 goes against intuition: a1 and a2 do wrap different pointers, as the assert just before it confirms.

In a1 != a2, lacking an explicit comparison operator on Ptr, both a1 and a2 are implicitly converted to bool. Because a1 and a2 are non-null, both are seen as true, so the comparison fails and the ‘else’ branch executes. Thanks to the same implicit promotion to bool, the mixed-type comparisons compile properly, but their results are meaningless.

To fix the a1 != a2 comparison, we need to explicitly define comparison operators on Ptr.

template <class T>
bool operator == (const Ptr<T>& a, const Ptr<T>& b) { return a.get() == b.get(); }
template <class T>
bool operator != (const Ptr<T>& a, const Ptr<T>& b) { return a.get() != b.get(); }

This will produce:

ocoudert:~/src$ g++ sample.cc && ./a.out
OK: a1 != a2
OK: b2 == a2
WRONG: b2 != a1 failed.

For the comparisons that mix Ptr<int> and Ptr<char>, we need to decide on the semantics of comparing two Ptr instantiated with different classes. Let’s say that the comparison should be on the type, not the pointer:

template <class T, class U>
bool operator == (const Ptr<T>& a, const Ptr<U>& b) { return false; }
template <class T, class U>
bool operator != (const Ptr<T>& a, const Ptr<U>& b) { return true; }

We now have:

ocoudert:~/src$ g++ sample.cc && ./a.out
OK: a1 != a2
WRONG: b2 == a2 failed.
OK: b2 != a1

If instead we choose to carry out the comparison on the pointer itself, we would write:

template <class T, class U>
bool operator == (const Ptr<T>& a, const Ptr<U>& b) { return a.get() == (T*)b.get(); }
template <class T, class U>
bool operator != (const Ptr<T>& a, const Ptr<U>& b) { return a.get() != (T*)b.get(); }

And we get instead:

ocoudert:~/src$ g++ sample.cc && ./a.out
OK: a1 != a2
OK: b2 == a2
OK: b2 != a1

However we are still not out of the woods. We need to consider all the Boolean operators. For example we can still write ((a1 < a2) || (a2 < a1)), and that expression will evaluate to false. The complete approach should follow the safe bool idiom.

The conclusion: implicit type casting makes code easier to write and more legible; it is also a great tool when working with legacy code one cannot afford to change. But it must be done very carefully. Unwanted implicit type casts may create more problems than they solve.


Olivier Coudert on July 30th, 2013

Less is often better. In mathematics, physics, and the arts, simplifying and shedding every bit of complexity and redundancy have produced remarkable results. It leads to abstraction, elevates expressiveness, and reveals patterns that are otherwise buried in details.

Programming is no different. For a developer who looks for correctness (does my program behave as expected?), efficiency (does my program use CPU and memory resources appropriately?), and robustness (can I reuse my program or extend it easily for future applications?), minimalism is a sure guideline. The beauty of simplicity translates into better programming.

  1. The less code, the fewer bugs. Reduce the size of your code with factorization, dead code removal, and the use of standard libraries.
  2. Keep function size small. If it doesn’t fit on your screen, it is probably too complex.
  3. Keep the depth of control structures small. Too many nested conditional and loop statements are hard to understand.
  4. Minimize accessibility. Make your variables and functions private or protected whenever possible.
  5. Minimize the lifetime of variables. Use persistent data only as a last resort.
  6. Minimize the scope of variables. This makes reading the code simpler. And it helps the compiler to better optimize the code.
  7. Make the API as simple as possible. It helps eliminate redundancy and strengthen semantics.
  8. Use meaningful, standardized names for classes, functions, and variables. Just reading the name of a function should be enough to understand what it does.
  9. Clearly differentiate which data is mutable and which is not in the scope of a function. Use const declaration whenever possible.

Now go and rake your Zen garden.


Olivier Coudert on July 15th, 2013


I was prompted to write this post after days (weeks) of frustration working with a new company. I thought it would capture the essence of what software development should be.

A bit of context first.

There are people out there whose motto is “we make money selling hardware, not software”. Because of this, they are misled into believing that software is just an addendum to their hardware, and therefore does not deserve the attention it receives in “real” software companies. In the same line of thinking, software bugs can be “patched”, so why invest much in software?

How delusional these people are.

Just to pick one example close to our field: remember Altera’s debacle in the early 2000s? At the time, they were #1 in the FPGA market. Then they chose to release a half-baked, mostly untested version of their new place-and-route software. The damn thing just didn’t work, to the point that some Altera salespeople recommended that their own customers hold on to the old release, or even temporarily switch to Xilinx… 18 months later, Altera had lost its top spot to Xilinx, which Xilinx still owns today.

Other examples of hardware-selling companies that missed the point on software are legion in the telecom industry: Motorola, Nokia, Research In Motion, to name the most obvious. Consider also the blasting success of Apple’s iPad with its iOS, still leading the market despite the many contenders running some flavor of Android, not to mention Microsoft with its Surface tablet.

Do I need to say more? It’s not because your primary product is hardware that you can afford to overlook software. You might give your software away, you might have fewer than 100 users, but your software still requires the same attention as if it were the main source of your revenue, or as if you had millions of users. Trying to get away with poor software will kill your business, every single time.

Software engineering relies on the following practicalities:

  • Testing
    • Code that you cannot test is useless. Design code so that it can be tested.
    • Code that is not tested is just a time bomb waiting to explode. “Developing code” really means writing code and writing tests.
    • Unit test every time you can (see more here). This forces developers to flush out every single detail of an API and its implementation. And unit tests run fast.
  • API design. Designing a good API is difficult (see more here). Do spend the time to get it right. Else you’ll pay a hefty price later.
  • Automation. Building, testing, versioning, code merging, all must be automated. Developers should just type a single command line and be done with it. Anything else is time consuming and error-prone.
  • Quality measurement. This covers a lot of aspects, but suffice it to say that you can only improve what you can measure –speed, memory usage, robustness, coverage, etc.
  • Uniformity. Do have one coding style. Do have a unique way of building your software and testing it. This way you factorize the effort and increase efficiency.
  • Short turnaround time. You want a fast development cycle? Then make your build and testing as fast as you can, so that developers can write, test, and commit code in small, manageable increments. Do use parallel builds on a grid. Do use pre-compiled libraries. But don’t compromise automation when doing so.

That’s it. No fancy theories, no pompous methodologies, just down-to-earth, self-explanatory practices. Or so I wish.
