2015-03-02

Go, GitHub and Travis: a small lesson in dependencies

Recently, I ran into a small, but interesting, issue in a new open-source project that I am working on: RxnWeaver. It is being developed in Go. Inspired by a few other projects that I follow, I set up Travis CI for this project.

I do the development in my fork of the project, before raising periodic (or need-based) pull requests to the main repository.

In a particular commit, I happened to add a few exported constants to a package (let us call it Package A) in the repository. In the next commit, I added code that depended on some of those constants to a different package (let us call it Package B) in the same repository. As usual, I pushed the commits to my GitHub fork after making sure that the code was formatted with go fmt and that the tree built without errors. I raised a pull request to the main repository, and the fun began!

Travis CI reported a failure, saying that the build did not complete successfully. A little investigation revealed the cause. In Package B, references to Package A use the official github.com/RxnWeaver/RxnWeaver import paths. The official version of that package, however, still had the old set of constants, which did not include the new, required ones. Therefore, Package B could not be built.
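
To make the failure mode concrete, here is a minimal sketch (the sub-package and constant names are hypothetical; only the repository path is real). The code lives in my fork, yet it names Package A by its canonical import path, so Travis CI resolves that path to the upstream repository, where the constant does not exist yet:

    // Package B, as committed in my fork.
    package b

    import (
        "fmt"

        // Canonical import path: on Travis CI, `go get` fetches the upstream
        // RxnWeaver repository, not my fork.
        pkga "github.com/RxnWeaver/RxnWeaver/pkga"
    )

    // Describe uses a constant that, at this point, exists only in my fork.
    func Describe() {
        fmt.Println(pkga.NewlyAddedConstant)
    }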

The commit, though, that was intended to update Package A was part of the same pull request!

Of course, it was easy to fix, but the simple lesson is this: unless you want to resort to the command line and some git trickery, remember to raise a pull request (and have it merged) for each piece that another Go package in the repository depends on!

2014-03-04

On Effects

Background

Today, I happened to watch a panel discussion from Lang.NEXT 2012. It featured Anders Hejlsberg, Martin Odersky, Gilad Bracha and a certain Peter Alvaro, and was anchored by the inimitable Erik Meijer. At the end of what turned out to be a very interesting discussion, a Q&A session was hosted. The last few questions related to effects-checking at the language level. Odersky responded by saying that effects-checking in static type systems is currently at about the same primitive level as static typing itself was in the days of Pascal: clunky and cumbersome!

Watching that video finally prompted me to collect my thoughts into a written note.

The Beginning

The first time I pondered the question of effects was around 1994-95, when I started working on C++ programs with sizes in the range of a hundred thousand to a million lines of code. A prominent stimulus was the const annotation on methods. The C++ compiler was capable of tracing changes to the current object made by a given method. In particular, I took note of the transitive nature of const. This transitive nature was simultaneously useful and painful. The const-ness problem in C++ is well known, and well documented too.

Curiously, a method annotated const could still call external code, perform I/O, etc., as long as it did not mutate the current object. Often, I wanted a const method not to get too intelligent and perform unspecified operations. This was particularly true of third-party code without accompanying source. But, C++ provided no mechanism to declare and enforce any such restriction.

Unmet Need

I went on to write programs for the IBM 360/370 family of mainframe operating systems, a few flavours of UNIX, Windows NT, etc. Over the years, numerous times, I felt the same need for better guarantees on methods (and functions/procedures). I found it interesting that none of the languages that I worked with, in any of those environments, provided a solution to this problem.

Every once in a while, I would think of effects. Those were mostly unstructured thoughts, though. In addition, I was a typical applications programmer, with no formal background in computer science and programming languages. Having moved from theoretical Physics into programming, I often tried drawing analogies and parallels. Some of them were useful – sometimes and to some extent – but would always break down eventually.

Explicit Effects

By 1998-9, I had begun developing a better appreciation for dynamically-typed languages. Not weakly-typed languages such as Perl and Tcl, but strongly-typed ones such as Smalltalk, Python and (later) Ruby. I had accidentally come across the Smalltalk Blue Book by Goldberg and Robson. It opened my eyes to several new windows and doors! I employed Python and Ruby in a variety of projects, with great results for my clients. In the process, for a few years, I did not explore statically-typed languages. Nonetheless, the issue of effects surfaced time and again, particularly as the sizes of code bases and teams increased.

I returned to a large (> a million LoC) C++ project in 2002, and that work stirred my thoughts on effects yet again. Based on my experiences, I began collecting a wish list of the kinds of effects that I wanted the compiler to track. Towards that, I began comparing my thoughts to the facilities provided by some of the languages that I used or became aware of.

Java

Java's checked exceptions force a method either to handle a thrown exception or to re-throw it and declare as much in the method signature. While no other effects can be declared, a consumer knows from the signature that such a method may not return normally, but may instead throw one of the specified exceptions.

Haskell

I came across Haskell in 2003. I found the basics easy enough to follow, and wrote small exercise programs to gain some familiarity with it. In those days, there were not many easy tutorials for beginners, which meant frequent research into the scant documentation. As I read about Haskell, I found three interesting aspects standing out [1]:

  • all Haskell functions are technically unary,
  • its system of type classes, and
  • its effects system.

The latter, of course, is relevant to the current discussion. Haskell does not require us to say anything specific about a pure function. On the other hand, when a function is not pure, Haskell requires us to utilise an appropriate type (usually a monad) to indicate the specific manner in which the function causes side effects. This allows for user-defined monads to specialise the kind of effects caused by a function. Once defined and used, these monads are utilised by Haskell's powerful type system to ensure consistency of use across the program.

Much later, I happened to watch the video of a talk by Erik Meijer, in which he remarked that there are many ways for a function to be impure, but there is only one way to be pure. And, the dots connected!

D

D follows the approach of C++, but takes it further. Unlike C++'s const, a pure function in D must be free of side effects. This is a much stronger guarantee, and helps significantly. However, notice the difference between Haskell's philosophy and D's: functions (and methods) in D are assumed impure by default. And, thus, pure functions (and methods) have to be explicitly marked pure.

Nimrod

An interesting variation can be found in Nimrod. It provides some pragmas to specify effects [2]. In particular, we can specify the possible exceptions thrown by a proc or a method. If a proc does not declare any exceptions, it is assumed to be able to throw the base exception type. To avoid that, it has to expressly declare an empty list of exceptions.

There are plans to implement read and write tracking in Nimrod. In addition, an interesting feature is the capability to tag a proc or a method with some types. The meaning of those types is ascribed by the user; Nimrod doesn't appear to care! However, once specified, these tagged types are tracked by the compiler analogously to how exceptions are tracked. Thus, it provides an expressive mechanism to introduce user-defined effect types as long as they behave similarly to exceptions.

Evolution of My Thoughts

The numerous projects that I worked on shaped the development of my own thoughts on effects. Apart from working on huge assembler, COBOL, PL/I and Rexx code bases on IBM mainframes, I worked on large projects that used C, C++, Java, Python, Ruby, etc. in a wide variety of application domains. Particular combinations of application domains and languages sometimes led to specific realisations.

Tracking Effects

I believe that effects tracking can be effectively implemented in both statically-typed languages and dynamically-typed ones. Type systems for effects appear to be orthogonal to those for values. Accordingly, the following discussion does not distinguish between the static vs. dynamic nature of types for values. Similarly, it does not distinguish between object-oriented and non-object-oriented languages. It does, on the other hand, assume that there is an ahead-of-time or just-in-time compilation phase — i.e. parsing the source should not result in an AST that is directly executed immediately.

Compiler-Defined Effects

An analysis of the program by the compiler is necessary for any effects system to be useful. The signature of each function or method in the program has to be verified against the inferred effects of that function or method. All deviations have to be marked as errors, and the compiler should refuse to compile such code. Effects should be annotated as a possible combination of:

  • mutates – mutates object state,
  • mutates_params – mutates one or more input parameters passed by reference,
  • reads – reads input from the world: heap, message queues, files, network, etc.,
  • writes – writes output to the world: heap, message queues, files, network, etc.,
  • tainted – invokes untrusted external code,
  • recursive – may not return due to self recursion,
  • i_recursive – may not return due to mutual or more indirect recursion, and
  • throws – may throw one or more exceptions.

A function or method with none of the above effects is considered pure: it is a mathematical function!
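
As a purely illustrative sketch (in Go, since no mainstream language checks such annotations today), here is how a routine touching the file system might be labelled with the wish-list effects above; the `effects:` comment is hypothetical notation, not something any existing tool understands:

    package main

    import "os"

    // copyFile copies one file to another.
    //
    // effects: reads, writes, throws   <-- hypothetical annotation, not checked
    func copyFile(dst, src string) error {
        data, err := os.ReadFile(src) // reads: input from the world (file system)
        if err != nil {
            return err // throws: the failure propagates to the caller
        }
        return os.WriteFile(dst, data, 0o644) // writes: output to the world
    }

    func main() {
        _ = copyFile("/tmp/copy.txt", "/tmp/original.txt")
    }

A checker in the spirit described above would infer reads, writes and throws for copyFile from its body, and would reject the function if its declared annotation claimed less.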

User-Defined Effects

User-defined effects are second class citizens. They are tracked like compiler-defined ones, but the compiler itself cannot relate to the meaning of such effects.

Propagation of Effects

throws and user-defined effects can be discharged: it should be possible to handle them, and thereby stop their propagation. Whether the resolution of such handled effects is nominal, involves subtyping, etc., depends on the type system of the language. All other effects are fundamentally transitive in nature. Each such effect propagates up the call hierarchy from where it occurs.

This mandates run-time compilation in the case of languages supporting fully separate compilation of modules/packages/…. Cross-boundary passing of lambdas and methods to higher-order functions and methods necessitates dynamic compiler checks at run time. Violations of effects guarantees should lead to a designated run-time exception that cannot be handled.

This can yet be avoided should it be possible to perform a whole-program analysis upon dynamic linking. But, maybe that opens a different Pandora's box!


[1] At that time, I could not comprehend the machinery behind those. Not that I fully comprehend it now, either; my current understanding is only marginally better!

[2] http://nimrod-lang.org/manual.html#effect-system

2014-01-08

Another go at Go ... failed!

After a considerable gap, I gave Go another go!

The Problem

As part of a consulting engagement, I accepted a project to develop some statistical inference models in the area of drug (medicine) repositioning. Input data comprises three sets of associations: (i) between drugs and adverse effects, (ii) between drugs and diseases, and (iii) between drugs and targets (proteins). Using drugs as hub elements, associations are inferred between the other three kinds of elements, pair-wise.

The actual statistics computed vary from simple measures such as sensitivity (e.g. how sensitive is a given drug to a set of query targets?) and clustering coefficients of the multi-mode graph, to construction of rather complex confusion matrices, computation of measures such as Matthews Correlation Coefficient, to construction of generalised profile vectors for drugs, diseases, etc. Accordingly, the computational intensity varies considerably across parts of the models.

For the size of the test subset of input data, the in-memory graph of direct and transitive associations currently has about 15,000 vertices and over 14,000,000 edges. This is expected to grow by two orders of magnitude (or more) when the full data set is used for input.

Programming Language

I had some temptation initially to prototype the first model (or two) in a language like Ruby. Giving the volume of data its due weight, though, I decided to use Ruby for ad hoc validation of parts of the computations, with coding proper happening in a faster, compiled language. I have been using Java for most of my work (both open source as well as for clients). However, considering the fact that statistics instances are write-only, I hoped that Go could help me make the computations parallel easily [1].

My choice of Go caused some discomfort on the part of my client's programmers, since they would have to maintain the code down the road. Nevertheless, no serious objections were raised. So, I went ahead and developed the first three models in Go.

Practical Issues With Go

The Internet is abuzz with success stories involving Go; there isn't an additional perspective that I can add! The following are factors, in no particular order, that inhibited my productivity as I worked on the project.

No Set in the Language

Through (almost) every hour of this project, I found myself needing an efficient implementation of a set data structure. Go does not have a built-in set; it has arrays, slices and maps (hash tables). And, Go lacks generics. Consequently, any generic data structure not provided by the compiler cannot be implemented in a library. I ended up using maps as sets. Everyone who does that realises the pain involved, sooner rather than later. Maps provide uniqueness of keys, but I needed sets for their set-like properties: being able to compute differences, unions, intersections, etc. I had to code those in-line every time. I have seen several people argue vehemently (even arrogantly) in golang-nuts that it costs just a few lines each time, and that it makes the code clearer. Nothing could be further from the truth. In-lining those operations only reduced readability and obscured my intent. I had to consciously train my eyes to recognise those blocks to mean union, intersection, etc. They also were very inconvenient when trying different sequences of computations for better efficiency, since a quick glance never sufficed!
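
For the record, this is roughly the shape of those in-lined operations when maps stand in for sets (a self-contained sketch with throwaway data, not the project's actual types):

    package main

    import "fmt"

    func main() {
        // Two sets of strings, represented as maps with empty struct values.
        a := map[string]struct{}{"p": {}, "q": {}, "r": {}}
        b := map[string]struct{}{"q": {}, "r": {}, "s": {}}

        // Intersection: keys present in both `a` and `b`.
        inter := make(map[string]struct{})
        for k := range a {
            if _, ok := b[k]; ok {
                inter[k] = struct{}{}
            }
        }

        // Union: keys present in either `a` or `b`.
        union := make(map[string]struct{})
        for k := range a {
            union[k] = struct{}{}
        }
        for k := range b {
            union[k] = struct{}{}
        }

        // Difference (`a` minus `b`): keys of `a` absent from `b`.
        minus := make(map[string]struct{})
        for k := range a {
            if _, ok := b[k]; !ok {
                minus[k] = struct{}{}
            }
        }

        fmt.Println(len(inter), len(union), len(minus)) // 2 4 1
    }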

Also, I found the performance of Go maps wanting. Profiling showed that get operations were consuming a good percentage of the total running time. Of course, several of those get operations are actually to check for the presence of a key.

No BitSet in the Standard Library

Since the performance of maps was dragging the computations back, I investigated the possibility of changing the algorithms to work with bit sets. However, there is no BitSet or BitArray in Go's standard library. I found two packages in the community: one on code.google.com and the other on github.com. I selected the former, both because it performed better and because it provided a convenient iteration over only the bits set to true. Mind you, the data is mostly sparse, and hence both of these were desirable characteristics.

Incidentally, both bit set packages showed varying performance. I could not determine the sources of those variations, since I could not easily construct test data to reproduce them on a small scale. A well-tested, high-performance bit set in the standard library would have helped greatly.
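
To give an idea of what I was hoping for from the standard library, here is a bare-bones sketch of such a type; it is my own illustration, not the code of either community package:

    package main

    import "fmt"

    // BitSet is a minimal bit set backed by 64-bit words; it grows as needed.
    type BitSet struct {
        words []uint64
    }

    // Set turns bit `i` on.
    func (b *BitSet) Set(i uint) {
        w := i / 64
        for uint(len(b.words)) <= w {
            b.words = append(b.words, 0)
        }
        b.words[w] |= 1 << (i % 64)
    }

    // Test reports whether bit `i` is on.
    func (b *BitSet) Test(i uint) bool {
        w := i / 64
        return w < uint(len(b.words)) && b.words[w]&(1<<(i%64)) != 0
    }

    func main() {
        var bs BitSet
        bs.Set(3)
        bs.Set(700)
        fmt.Println(bs.Test(3), bs.Test(4), bs.Test(700)) // true false true
    }

A production-quality version would, of course, also need fast iteration over just the bits that are set, which is what made the code.google.com package attractive for this sparse data.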

Generics, or Their Absence

The general attitude in the Go community towards generics seems, unfortunately, to have degenerated into a mix of disgust and condescension. Well-made cases that illustrate problems best served by generics are being dismissed with such impudence and temerity as to cause repulsion. That Russ Cox's original formulation of the now-famous tri-lemma is incomplete at best has not sunk in despite four years of discussions. Enough said!

In my particular case, I have six sets of computations that differ in:

  • types of input data elements held in the containers, and upon which the computations are performed (a unique combination of three types for each pair, to be precise),
  • user-specified values for various algorithmic parameters for a given combination of element types,
  • minor computational steps, and
  • types (and instances) of containers into which the results aggregate.

These differences meant that I could not write common template code that could be used to generate six versions using extra-language tools (as inconvenient as that already is). The amount of boiler-plate needed externally to handle the differences very quickly became both too much and too confusing. Eventually, I resorted to six fully-specialised versions each of data holders, algorithms and results containers, just for manageability of the code.

This had an undesirable side effect, though: now, each change to any of the core containers or computations had to be manually propagated to all the corresponding remaining versions. It soon led to a disinclination on my part to quickly iterate through alternative model formulations, since the overhead of trying new formulations was non-trivial.

Poor Performance

This was simply unexpected! With fully-specialised versions of graph nodes, edges, computations and results containers, I was expecting very good performance. Initially, it was not very good. In single-threaded mode, a complete run of three models on the test set of data took about 9 minutes 25 seconds. I re-examined various computations. I eliminated redundant checks in some paths, combined two passes into one at the expense of more memory, pre-identified query sets so that the full sets need not be iterated over, etc. At the end of all that, in single-threaded mode, a complete run of three models on the test set of data took about 2 minutes 40 seconds. For a while, I thought that I had squeezed it to the maximum extent. And so thought my client, too! More on that later.

Enhancement Requests

At that point, my client requested three enhancements, two of which affected all the six + six versions of the models. I ploughed through the first change and propagated it through the other eleven specialised versions. I had a full taste of what was to come, though, when I was hit with the realisation that I was still working on Phase 1 of the project, which had seven proposed phases in all!

Back to Java!

I took a break of one full day, and did a hard review of the code (and my situation, of course). I quickly identified three major areas where generics and (inheritance-based) polymorphism would have presented a much more pleasant solution. I had already spent 11 weeks on the project, the bulk of that going into developing and evaluating the statistical models. With the models now ready, I estimated that a re-write in Java would cost me about 10 working days. I decided to take the plunge.

The full re-write in Java took 8 working days. The ease with which I could model the generic data containers and results containers was quite expected. Java's BitSet class was of tremendous help. I had some trepidation about the algorithmic parts. However, they turned out to be easier than I anticipated! I made the computations themselves parts of abstract classes with formal type parameters, with the concrete parts (such as substitution of actual types, the user-specified parameters and minor variations) implemented by the subclasses. Conceptually, it was clear and clean: the base computations were easy to follow in the abstract classes. The overrides were clearly marked so, and were quite pointed.

Naturally, I expected a reduction in the size of the code base; I was not sure by how much, though. The actual reduction was by about 40%. This was nice, since it came with the benefit of more manageable code.

The most unexpected outcome concerned performance: a complete run of the three models on the test set of data now took about 30 seconds! My first suspicion was that something went so wrong as to cause a premature (but legal) exit somewhere. However, the output matched what was produced by the Go version (thanks Ruby), so that could not have been true. I re-ran the program several times, since it sounded too good to be true. Each time, the run completed in about 30 seconds.

I was left scratching my head. My puzzlement continued for a while, before I noticed something: the CPU utilisation reported by /usr/bin/time was around 370-380%! I was now totally stumped. conky showed that all processor cores were indeed being used. How could that be? The program was very much single-threaded.

After some thought and Googling, I saw a few factors that potentially enabled a utilisation of multiple cores.

  • All the input data classes were final.
  • All the results classes were final, with all of their members being final too.
  • All algorithm subclasses were final.
  • All data containers (masters), the multi-mode graph itself, and all results containers had only insert and look-up operations performed on them. None had a delete operation.

Effectively, almost all of the code involved only final classes. And, all operations were append-only. The compiler may have noticed those; the run-time must have noticed those. I still do not know what is going on inside the JRE as the program runs, but I am truly amazed by its capabilities! Needless to say, I am quite happy with the outcome, too!

Update: As several responses (both here and on Hacker News) stated, Java's multi-threaded GC appears to be the primary reason for the utilisation of all the processor cores.

Conclusions

  • If your problem domain involves patterns that benefit from type parameterisation or [2] polymorphism that is easily achievable through inheritance, Go is a poor choice.
  • If you find your Go code evolving into having few interfaces but many higher-order functions (or methods) that resort to frequent type assertions, Go is a poor choice.
  • The Go runtime can learn a trick or two from JRE 7 as regards performance.

These may seem obvious to more informed people; but to me, it was some enlightenment!


[1] I tried Haskell and Elixir as candidates, but nested data holders with multiple circular references appear to be problematic to deal with in functional languages. Immutable data presents interesting challenges when it comes to cyclic graphs! The solutions suggested by the respective communities involved considerable boiler-plate. More importantly, the resulting code lost direct correspondence with the problem's structural elements. Eventually, I abandoned that approach.

[2] Not an exclusive or.

2013-10-16

Captions on Indian trucks - an unexpected lesson

Several years ago, when I lived in Bengaluru, my company's offices used to be in Electronics City (for a few years). Owing to the distance and the disgusting volume of traffic, I used to leave for the office rather early, around 07:00.

In those early hours, long-distance trucks were permitted to travel through the city. As I overtook them, I used to read the captions written on those trucks. A particular caption was very, very common on trucks coming from the North: ``burI nazar vAlE, tErA muH kAlA" (roughly ``oh you who cast an evil eye on me, your face shall become blackened").

After a while, I grew so familiar with it, that I usually read only the first word before turning my attention to the next truck.

On a particular day, I had this truck right ahead of me, when we stopped at a traffic signal. The caption began with the usual ``burI", and I almost turned in another direction, but something pulled my attention back. The caption read: ``burI nazar vAlE, tErA bhI bhalA hO" (roughly ``oh you who cast an evil eye on me, I wish you well in spite of that").

I was stunned! It took me a while to digest that. Am I equal to that spirit? I don't think so. Nonetheless, it has had a rather mysterious effect on my thinking!

2013-08-14

Ring Detection - 1

In this two-part series, we look at how Ojus Chemistry Toolkit (OCT) currently implements ring detection. In this part, detection of rings and ring systems is described.

Preparation

N.B. OCT's molecule reader converts any hydrogen atoms that it encounters into implicit hydrogens, i.e., the list of atoms and bonds in an OCT molecule never contains a hydrogen atom. Similarly, the number of hydrogens attached to an atom is determined by looking at the atom's valence and any charge specified for it.

N.B. The Frerejacque value of the molecule (for a connected molecule, the number of bonds minus the number of atoms plus one) is checked to see if there are any cycles at all, before the following procedure is employed.

When we are interested in finding cycles (and only cycles), we should not look at open branches — those that contain only leaves. Accordingly, we remove them. In reality, all of this computation happens on temporary data structures, not the actual molecule itself.

Removing Terminal Chains

Remember terminal atoms from this old article? Quickly, a terminal atom is one which has a single neighbour. We remove all such atoms. However, removing a terminal atom could make its sole neighbour a terminal atom itself! We have to remove it, too. But, then, removing it … ad infinitum. Here is the pseudocode.

    var removing := true.

    while (removing) {
        removing := false.

        for each atom `a` {
            if `a` is a terminal {
                remove `a`.
                removing := true.
            }
        }
    }

Note that removing an atom entails removing all the bonds in which it participates, etc. Thus, by the time the outer loop exits, all terminal chains will have been removed.
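
For concreteness, here is a rough Go rendering of the same loop, assuming a plain adjacency-map copy of the molecule (OCT's actual data structures are richer than this):

    package main

    import "fmt"

    // pruneTerminalChains repeatedly deletes atoms having a single neighbour,
    // until no terminal atoms remain.  `adj` maps an atom ID to the set of its
    // neighbours' IDs; bonds are implied by adjacency.
    func pruneTerminalChains(adj map[int]map[int]bool) {
        removing := true
        for removing {
            removing = false
            for a, nbrs := range adj {
                if len(nbrs) == 1 {
                    // Removing the atom entails removing its sole bond first.
                    for nbr := range nbrs {
                        delete(adj[nbr], a)
                    }
                    delete(adj, a)
                    removing = true
                }
            }
        }
    }

    func main() {
        // A triangle (1-2-3) with a two-atom tail (4-5) hanging off atom 3.
        adj := map[int]map[int]bool{
            1: {2: true, 3: true},
            2: {1: true, 3: true},
            3: {1: true, 2: true, 4: true},
            4: {3: true, 5: true},
            5: {4: true},
        }
        pruneTerminalChains(adj)
        fmt.Println(len(adj)) // 3: only the ring atoms survive
    }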

How Many Rings?

One Ring or More?

At this point, if all the atoms have exactly two neighbours, there is only one ring in the molecule. Why? More than one ring implies that a second ring is either fused to the first one, or is independent of it, but connected through a linking chain of atoms. In either case, there should be at least one atom that is a junction — it should have at least three bonds.

Cycle Detection

Case of One Ring

In the case of the molecule having only one ring, we mark that ring as the sole ring. We also create a single ring system with that sole ring. And, we are done!

Case of Multiple Rings

Detection of two or more rings present in the molecule is (rather) evidently more complex than that of a single ring. Over the last decade, or so, I have written four or five distinct algorithms to detect multiple rings in a molecule. All of them employed a recursive depth-first approach. The recursive approach made certain parts of the algorithm easier and clearer, while complicating certain others.

This time, though, I chose to employ a breadth-first approach. The outline is as follows.

    var candidates := new list of paths.
    var path := new list of atoms.

    var a := the first non-junction atom in the molecule.
    if no such atom exists {
        a := the first atom in the molecule.
    }
    path := `path` with `a` appended.
    candidates := `candidates` with `path` appended.

    while (`candidates` is not empty) {
        var test-path := fetch the first path in `candidates`.
        try-path `test-path`.
    }

As you can see, I have a queue into which I inject all candidate paths. A candidate path is potentially incomplete, and may need to be extended by adding a neighbour of the last atom in it, should one such exist. But that is part of processing the path.

Process a Test Path

    var current := last atom in `test-path`.

    for each neighbour `nbr` of `current` {
        if `nbr` was visited already in `test-path` {
            var ring := path from previous occurrence of `nbr` to `current`.
            if `ring` is valid {
                record `ring`.
            }
            continue loop.
        }

        var new-path := `test-path` with `nbr` appended.
        candidates := `candidates` with `new-path` appended.
    }

Thus, we add each path extension to the queue of candidates, at each atom that we encounter. Now, given a candidate ring, how do we validate it? First, the easy case: if the candidate has only three atoms, then it is certainly a genuine ring. Good! But, what if it has more than three?

Preliminary Validation

    if size of `test-path` is 3 {
        answer `true`.
    }

    for each atom `a` in `test-path` {
        if more than 2 neighbours of `a` exist in `test-path` {
            answer `false`.
        }
    }

    answer `true`.
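
Using the same hypothetical adjacency-map representation as in the earlier sketch, the check could look like this in Go (shown as a lone function; it plugs into that sketch rather than standing alone):

    // isGenuineRing reports whether a closed candidate path is a simple ring:
    // a three-atom ring always is; a larger one is genuine only if no atom in
    // it is bonded to more than two other atoms of the same path.
    func isGenuineRing(path []int, adj map[int]map[int]bool) bool {
        if len(path) == 3 {
            return true
        }
        inPath := make(map[int]bool, len(path))
        for _, a := range path {
            inPath[a] = true
        }
        for _, a := range path {
            count := 0
            for nbr := range adj[a] {
                if inPath[nbr] {
                    count++
                }
            }
            if count > 2 {
                return false
            }
        }
        return true
    }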

Ring System Detection

Once we have recorded all rings, we sort them by size. The next task is to partition this set of rings into equivalence classes of ring systems. Two given rings belong to the same ring system if they have at least one bond or at least one atom in common.

Partitioning

The actual task of partitioning the set of rings is quite straightforward.

    var ring-systems := new list.

    outer:
    for each recorded ring `r` {
        for each ring system `rs` {
            if `r` and `rs` share a bond or
               `r` and `rs` share an atom {
                mark `r` as belonging to `rs`.
                rs := `rs` with `r` appended.
                continue `outer`.
            }
        }

        var new-rs := new ring system.
        new-rs := `new-rs` with `r` appended.
        ring-systems := `ring-systems` with `new-rs` appended.
    }

Conclusion

We have, now, identified the rings in the molecule, and have partitioned the set into ring systems. However, this set could contain spurious rings. Eliminating the false candidates is, surprisingly, the more difficult procedure. We shall look into it in the next post!

2013-04-30

C++11's strongly-typed enums

C++11 provides strongly-typed enums. These new enums are no longer simple sets of integers; they are classes in their own right. And, their instances are objects, too!

In the following example, we define several enumerated values for possible stereo configurations of a chemical bond.

enum class BondStereo : int8_t
{
    NONE
    , UP 
    , DOWN 
    , UP_OR_DOWN 
    , E 
    , Z 
    , CIS 
    , TRANS 
    , UNSPECIFIED 
};

Note that the above enum class BondStereo has int8_t as its underlying data type. C++11 allows us to specify the precise integral data type that should be used. In turn, that allows us to choose specific types to best suit the required ranges of the enumerated values.

However, with this felicity comes an inconvenience. Since the instances of these enums are no longer simple integers, they cannot be used as such! The following is a simple and generic way of obtaining the underlying integral value of a given instance.

template <typename E>
auto enumValue(const E e) -> typename std::underlying_type<E>::type
{
    return static_cast<typename std::underlying_type<E>::type>(e);
}

Note the use of auto and the trailing return type declaration. This is needed, since we do not know in advance the exact underlying type of a given instance.

Great, you say, but which headers should I include to be able to use this functionality? Well, that is left as an exercise :-) Enjoy!

2013-04-23

Operational Disaster

My return to blogging did not turn out to be a real return, evidently! Here is an update after several months, again!

Operational Disaster

An unprecedented, peculiar and curious sequence of quick events took place in the middle of March. I was investigating the possible use of a VirtualBox image as a distribution mechanism for my organic synthesis planning product. My scientist created a CentOS 6.4 image with the program and its dependencies, and shared it. I made a copy of it, and imported the same into VirtualBox on my Windows 8 computer. It worked well, as intended. Thus far, the experiment was successful!

To ease testing the program in the VirtualBox environment, I mounted my primary data virtual disk in the CentOS image. The testing itself was successful. It appeared as if this mechanism was proving to be technologically sound and convenient as well.

Disaster Strikes!

Some more testing later, I decided to re-test the entire procedure. I removed the CentOS image from VirtualBox. However, it complained of an existing image when I tried to import the appliance the next time. Since the old image was no longer needed, I asked VirtualBox to delete it from the disk. It did so!

I ran through the procedure again, with the same initial success. I was quite satisfied with this approach. Then, exactly as before, I tried mounting my primary data virtual disk in this new CentOS image. Navigating to the drive where it was located, I was astonished to not find it there! Concerned, I opened Windows explorer, and navigated to the said drive. Indeed, the .vmdk file was gone!

What Had Happened?

As of the time that I deleted the CentOS image, the mounted external virtual disk had an entry in its /etc/fstab. So, when VirtualBox deleted the image, it deleted the mounted disk's file as well! VirtualBox claims that mounted disks that are used by other images are not deleted. How did this deletion happen, then?

The answer is straight-forward: the mounted disk was not used by any other VirtualBox image! It was the home partition of another Linux image, but that was a VMware image! So, VirtualBox was indeed true to its promise. But, my data was lost!

More Disaster!

I was distressed at having lost my home partition in this unforeseen manner. The lost virtual disk was my primary data drive. The host Windows home folder did not have anything of consequence, except in the Downloads folder. Fortunately, I had a backup from January. It was stored in a separate NTFS drive in the laptop, as well as an external USB disk.

So, losing no time, I proceeded to restore the contents from that backup. It was an encrypted backup. Since I always name my top-level folders with a common prefix, I asked for the files to be restored to my Windows home folder. Towards the very end of the unpacking exercise, something went wrong, and the system froze! I gave it several minutes to recover, but to no avail. There was no disk activity; no sign of life; even the Caps Lock key stopped responding after a while. I waited patiently ... with my fingers crossed, for over half-an-hour. The computer had really frozen!

Disappointed, I switched the power off. With considerable trepidation, I switched it on a few seconds later. Truly enough, the system froze while booting! I felt really let down. I tried restarting the computer a few times, with the same inevitable result — the login screen wouldn't appear. My entire Windows had become unusable! Two strokes in a row.

Rescue Attempt

I used my wife's computer to read about rescuing Windows. There was a Windows rescue partition (drive) in the hard disk. I tried to restore Windows using that rescue partition, following the instructions that I found in Microsoft's Web site. Alas! My earlier impulse upgrade to Windows 8 haunted me unpleasantly there. The rescue image was that of Windows 7, which came pre-installed with the HP hardware. When I attempted a rescue, it kept informing me that I had a more recent version of Windows installed, and that it couldn't rescue it!

For an hour, or so, I was quite dumbstruck. But then, life has to go on, of course! I contemplated possible courses of action. And finally chose what I knew best.

Go The Linux Way!

I burned Kubuntu 12.04 LTS to a DVD, and booted into the Live image. After making sure that all essential hardware worked as expected, I installed it to the hard disk as the only operating system. During the installation, I opted for the proprietary firmware for the Broadcom 4313GN wireless card, etc., to be downloaded and installed. Everything went smoothly. The only irritating aspect was the download of over a dozen English language packs! Readers will remember that I was irritated by this in my old Debian (virtual) installation too.

Upon re-booting, I found – as expected – a few hundreds of updates. Accordingly, I allowed the system to update. Next, from the Kubuntu PPA, I upgraded KDE to 4.10.1. After verifying that the system was working properly, I restored the backup from the external USB disk. It Just Worked!

Laptop Power Management

Initially, the laptop battery powered the system for only about 2 hours. Windows (both 7 and 8) used to last between 4 and 5 hours on a single charge. I read several articles and blog posts on what all improve battery life in Linux. None of them improved the situation measurably! powertop showed a baseline discharge rate of over 23W when idling. That was disappointing!

My HP laptop has an integrated Intel 4000 HD graphics card and an nVidia GEFORCE 630M discrete graphics card. I realised that I had to try Bumblebee. Following the instructions in one of the Ubuntu fora, I installed Bumblebee. I installed the primus configuration rather than the optimus one.

Having read a few bad things about Bumblebee, I had a trepidation similar to what I had felt when I re-booted the frozen Windows system. Fortunately, though, Kubuntu booted normally, and to my great relief, Bumblebee worked. powertop showed a new baseline consumption rate of a little over 10W when idling! Now I get the same 4-5 hours of battery life on a single charge!

What Do I Not Like?

The power management is a little too eager. It puts the wireless interface to sleep every few minutes. For it to wake up takes several seconds upon next use. I have to keep staring at the screen until then, sometimes a little impatiently. These days, though, I use powertop to put the Broadcom 4313GN wireless card into the 'Bad' state, so that it is not put to sleep so aggressively.

What Do I Miss?

All of this is fine, of course, but do I miss anything? What I miss most is a native Google Drive application. I usually do not sign on-line petitions, but I made an exception for the one at Drive4Linux. I was disappointed to find only about 1,500 signatures. Nevertheless, I signed it, and requested my G+ contacts to follow suit if they use Linux and Google Drive.

Other than the above, I have not felt any notable inconvenience or loss of functionality, so far! Thus, after a break of about five years, I have returned to running Linux natively in my primary computer!!

2012-12-27

Return to blogging!

After several months of silence, here I return to blogging! A few quick updates are in order, in no particular order.

Trivia

  • When the battery of my MacBook Pro began failing in May, I purchased a relatively low-end HP Pavilion dv6 7040TX pre-installed with Windows 7. I mostly like it. It generates very little heat. By contrast, the MacBook Pro is a mini heater for the winter. Another noticeable feature is battery life: I am getting about five hours of development time per charge. The only downside is the low screen resolution, which is 1366x768. In practice, though, it has proved to be adequate for my development needs.
  • It was a rare occasion when I surprised myself by impulsively upgrading the HP computer to Windows 8! My unfamiliarity with Windows was amply proved by the numerous device and driver difficulties I encountered upon upgrading. Reading a related Microsoft Knowledge Base article revealed that there was an important step that I missed. [For the curious, we are supposed to uninstall and re-install the devices when upgrading in-place.]
  • VMware Player now offers OpenGL-based 3D support for Linux guests. Upon upgrading to the new version of Player, I realised promptly that Debian Wheezy had a problem that prevented it from recognising and utilising 3D devices. It appears as if Sid has this problem as well, since my experimental Aptosid image failed to turn on desktop effects.
  • Thus, I now run Linux Mint 14 KDE. [Of course, it is KDE!] It has been quite stable for my daily development needs (several Emacs windows, Eclipse 3.7 and several Konsole windows and tabs). This is in stark contrast to the frustrating experience with the Cinnamon version, which I downloaded first, mistaking it to be the KDE version. This demonstrates — yet again — why choice is so important, and why it underlies the philosophy of free and open source software!

Largely distracted months

I went through several months of non-work distractions. I am glad that those are nearing their respective conclusions. Not being able to concentrate on work can be really frustrating. More so if one's To Do list is long.

Experiments with languages

During these largely unproductive months, I studied a few languages, peripherally. Here is a summary.

Haskell

I had briefly looked at Haskell, in 2000. It looked so different that I promptly left it. Having gained a little more of functional thinking in the meantime, I decided to take another look at it. A good motivation was Hughes' influential paper "Why Functional Programming Matters". Some Haskell features are straight-forward: type classes, pattern matching, guards in functions, list comprehensions, etc. Some others are deep: higher-order functions, currying, lazy evaluation, etc. A few others go even deeper: functors, applicatives, monads, etc. Haskell demands considerable time and effort — not only the language proper, but the tool chain too. The syntax is layout-sensitive, and I could not even find a decent major mode for Emacs. The package tool called cabal repeatedly left my Haskell environment in an unstable state. Tooling is certainly a problem for the beginner, but otherwise, Haskell is a profound language that makes your thinking better!

Dart

Dart is a curious mix of the semantics of Smalltalk, the syntax of Java and JavaScript, and memory-isolated threads that communicate by sending messages. Added into this curious mix is a compile-time type system that does not affect the run-time object types! Mind you, Dart is strongly-typed. Even though there is a compile-time type system, it is optional and is primarily intended for better tooling, and the language itself is dynamically-typed. The types are carried by the objects, but the variables themselves are untyped. Dart's biggest promise is the ability to write scalable Web applications using a single language on both the server side and the client. The server side seems to present no problems, but the Web programming community is divided in its opinion on Dart's client side promise. The contention arises because Dart has its own virtual machine. Using the VM requires the user to install it as a plug-in in her browser. For those who do not want to use the VM, Dart comes with a cross-compiler that outputs equivalent JavaScript code.

D

I had known of the existence of D for several years, even though I never looked at it in detail. Reading a little about the history of D, I realised that it underwent a rather tumultuous adolescence. With D2, it appears to have entered adulthood. The motivation to look at D was an MSDN Channel 9 video of a presentation by Andrei Alexandrescu. D was designed to be a better C++. Several of the design decisions behind it can be better appreciated if we hold that premise in mind. It has a simplified, more uniform syntax, garbage collection, a decent standard library and easier generics. It maintains the ability to directly call a C function in an external library, immediately making a vast set of C libraries accessible. Scoped errors and compile-time function evaluation are examples of D's interesting features. Another notable feature is the partitioning of the language features into safe, trusted and unsafe subsets, with the ability to verify that a module is completely safe, etc. D's performance is reasonable compared to that of C++.

Others

I also looked briefly at Erlang and Clojure. However, I did not spend enough time on them to be able to form an opinion.

2012-04-23

The changing face of urban Hyderabad

A few days ago, my family went shopping to the Ameerpet area of Hyderabad. We shopped for about an hour-and-a-half; the time was 17:00. My wife wanted to have some coffee (so did I, in fact). We could not find a place that served coffee in the immediate vicinity. We walked in the general direction of a few restaurants. Thus began an amazing hour of discovery!

We went into the first restaurant that we came across. We seated ourselves at the first table available. Presently, a waiter turned up. ``Two strong, hot coffees," I said. ``No, sir," he replied promptly, ``we don't serve coffee." I was surprised. We picked up the bags, and walked on.

At the next restaurant, we were cautious. We did not go as far as seating ourselves; rather, we waited for a waiter to approach us. ``Do you serve coffee?" I enquired. We got the same reply, ``No." I was more surprised. We walked on.

The third restaurant was a familiar one. It has been around for over twenty five years. The last I had visited it, it used to serve coffee, tea and snacks. However, that was several years ago. My four-year-old son complained of hunger by this time. He wanted a pesarattu (a special Telugu dish that is a kind of thin-and-large pancake). I felt that there was a high probability that this restaurant would serve both pesarattu and coffee. So, we climbed up a floor to the restaurant. ``No, sir. We used to serve South Indian food until about six months ago. We no longer do. Now, we serve Mughalai, Tandoori and Chinese!" I was mildly astonished. My wife and I sighed simultaneously, and we walked on.

My son was very disappointed. As we walked, he was eagerly watching for another restaurant. This time, we had to walk quite some distance before we came across another. Its look made it clear that it was a very non-vegetarian-oriented restaurant. We did not bother to walk in. My wife and I had a quick consultation, and decided to turn around, pass the shopping area, and try in the other direction.

My son's disappointment grew with each passing twenty five metres, or so. He started getting petulant. We negotiated the distance back to the shopping area with some difficulty, coaxing my son along the way. As we walked past that, we soon realised that there were no restaurants within sight! By this time, we had spent close to an hour covering a total of a little over a kilometre, without finding a place that served South Indian snacks and coffee! We gave up, got into the car, and drove back home.

The episode left me wondering, however, about the dramatic transformation that Hyderabad has undergone in the last couple of decades. It is very difficult these days to find decent (or even semi-decent) restaurants that serve Telugu vegetarian food. I have noticed the same trend in Bengaluru too, particularly for supper. A large number of restaurants have colluded to systematically eliminate South Indian menus. A key reason is that Mughalai, Tandoori, Chinese, etc. food is much more expensive. The restaurants earn significantly more per table-hour when they serve them. The constant in-flow of North Indians into Hyderabad has only made it easier for the restaurants to switch over.

Another dimension that has seeped in over the years is that of western fast food (pizzas, burgers, etc.). In the name of maintaining international quality at an international price, the western chains charge ridiculously high prices (by Indian standards) for such fast food. We have to remember, however, that economic liberalisation has placed sudden money and means in the hands of an entire new crop of employees and entrepreneurs (and their pizzas-and-potato-chips brats). India has, consequently, been witnessing rapid changes in urban social patterns. The new-found affluence has resulted in a large number of families dining out several times a week. And, in the name of novelty, a vast majority of them patronise the more expensive varieties. The smaller restaurants, obviously, do not wish to let the opportunity slip by. We see, thus, a steady decline in the number of restaurants serving native food.

Craving for the new often dislodges the old! In this instance, Telugu (South Indian, in general) food and beverages are the casualty!

2012-04-07

Graphs and molecules - 2

Note: This post utilises MathJax to display mathematical notation. It may take a second or two to load, interpret and render; please wait!

If you have not read the previous post in this series, please read it first here.

Ordering

The notion of ordering is very intuitive in the context of natural numbers. Indeed, when we learn natural numbers, their representation \(\{1, 2, 3, 4, \ldots\}\) itself imprints an ordering relationship in our minds. Soon enough, we learn to assign a sense of relative magnitude to those numbers: 4 is larger than 2, etc. This concept extends naturally to negative numbers and rational numbers too.

A little rigour

Suppose that we represent the ordering relationship between two elements of a set using the symbol \(\le\). Then, we can define the properties that a set \(S\) should satisfy for it to be ordered.

  • Reflexivity: \(a \le a\ \forall a \in S\)
  • Antisymmetry: if \(a \le b\) and \(b \le a\), then \(a = b\ \forall a, b \in S\)
  • Transitivity: if \(a \le b\) and \(b \le c\), then \(a \le c\ \forall a, b, c \in S\)

We can readily see that integers and rational numbers satisfy the above properties. Accordingly, we say that integers and rational numbers are ordered, if we assign the meaning smaller than or equal to to the ordering relationship \(\le\).

In fact, we can see that integers and rational numbers also satisfy an additional property.

  • Totality: \(a \le b\) or \(b \le a\ \forall a, b \in S\)

A distinction

Totality is a stricter requirement than the preceding three. It mandates that an ordering relationship exist between any and every pair of elements of the set. While reflexivity is easy enough to comprehend, the next two specify the conditions that must hold if the elements concerned do obey an ordering relationship.

It is easy to think of sets that satisfy the former three properties, but without satisfying the last. As an example, let us consider the set \(X = \{1, 2, 3\}\). Now, let us construct a set of some of its subsets \(S = \{\{2\}, \{1, 2\}, \{2, 3\}, \{1, 2, 3\}\}\). Let us define the ordering relationship \(\le\) to mean subset of represented by \(\subseteq\). Exercise: verify that the first three properties hold in \(S\).

We see that \(\{1, 2\}\) and \(\{2, 3\}\) are elements of \(S\), but neither is a subset of the other.

Therefore, mathematicians distinguish sets satisfying only the first three from those satisfying all the four. The former are said to have partial ordering, and they are sometimes called posets or partially-ordered sets. The latter are said to have total ordering.

More ordering

Now, let us expand the discussion to include irrational numbers. Do our definitions apply? There is an immediate difficulty: irrational numbers have non-terminating, non-repeating decimal parts! How do we compare two such numbers? How should we define the ordering relationship? The integral part is trivial; it is the decimal part that presents the difficulty.

Sequence comparison

In order to be able to deal with irrational numbers, we have to introduce an additional notion — sequences. A sequence is a collection (finite or infinite) of elements whose relative positions matter. Another distinction from a set is that elements can repeat, occurring at multiple places. The number of elements in a sequence, if it is finite, is called its length. Thus, sequences can be used to represent the decimal parts of irrational numbers.

Let \(X = \{x_1, x_2, x_3, \ldots\}\) and \(Y = \{y_1, y_2, y_3, \ldots\}\) be two sequences. We can define an ordering relationship between sequences as follows. We say \(X \le Y\) if one of the following holds.

  • \(X\) is finite with a length \(n\), and \(x_i = y_i\ \forall i \le n\) and \(y_{n+1}\) exists.
  • \(X\) and \(Y\) are infinite, and \(\exists\ n\) such that \(x_i = y_i\ \forall i \le n\), and \(x_{n+1} \le y_{n+1}\).

Armed with the above definition, we can readily see that we can compare two irrational numbers — in fact, any two sequences. Exercise: verify this claim by comparing two irrational numbers and two sequences of non-numerical elements!
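
As a quick illustration of the first half of that exercise: take \(X\) to be the sequence of decimal digits of \(\sqrt{2} = 1.41421\ldots\), i.e. \(X = \{4, 1, 4, 2, 1, \ldots\}\), and \(Y = \{4, 1, 4, 2, 5, \ldots\}\), the digits of some number \(1.41425\ldots\). Both sequences are infinite, \(x_i = y_i\ \forall i \le 4\), and \(x_5 = 1 \le y_5 = 5\); by the second rule above, \(X \le Y\), which agrees with \(\sqrt{2}\) being the smaller number.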

Bottom and top elements

If a set \(S\) of sequences has an element \(b\) such that \(b \le s\ \forall s \in S\), the element \(b\) is called the bottom element of the set. The element \(t\) that we get if we replace \(\le\) with \(\ge\) is called the top element of the set. The bottom and top elements, when they exist, are unique in a given set.

In our first example above, \(\{2\}\) is the bottom element of the set, while \(\{1, 2, 3\}\) is the top. However, it is important to understand that bottom and top elements may not exist in a given set of sequences. Exercise: think of one such set.

Minimal and maximal elements

When a set does not have a bottom element, it is yet possible for it to have minimal elements. For an element \(m\) to be a minimal element of the set \(S\), \(s \le m \implies s = m\ \forall s \in S\) should hold. If we replace \(\le\) with \(\ge\), we get maximal elements.

Minimal and maximal elements are difficult to establish (and, sometimes, even understand) in the context of infinite sets or complex ordering relationships. The same applies to bottom and top elements, too.

Conclusion

You may have begun wondering if the title of this post was set by mistake. On the contrary, these concepts are very important to understand before we tackle canonical representation of molecules, ring systems in molecules, etc., which we shall encounter in future posts.