Monthly Archives: March 2014

The "less is more" fallacy.

Programming language designers can differentiate their language in a couple of ways.

One is by adding new features; the other is by removing old ones.

It is often thought that the inclusion of a feature can promote deeply bad coding practice. One solution is to omit the feature, but this comes at the cost of restricting the language's power.

There are many examples of this.

  • Languages such as JavaScript and Java omit goto.
  • In Java (6 at least), arguments can only be passed by value (even object references are passed by value), and variables must carry explicit type annotations.
  • In languages like Ruby, even primitive values are boxed.
  • In Haskell you can't use mutable global variables, at least not easily, though that is forgivable in a language with academic roots.

Language designers seem to think they should be evangelists for good programming technique, and so they omit basic features that would be easy for them to implement.

In my opinion this is oversold, and there are better alternatives to the Machiavellian dictatorship that language designers like to impose. If designers allowed these features, more education would arise explaining when to use them. Source code could carry flags disallowing certain features, and linters could pick up this sort of thing as well.

Thankfully, there has been something of a renaissance on this front.

  • Google's Go language lets you use goto alongside modern features such as garbage collection.
  • Mozilla's Rust language supports pointers through which values can be reassigned.
  • Revisions to C++ are giving it more sophisticated features, like type inference.

In my opinion "more is more" is a much undervalued premise. Once programmers have learned how to program, they just want to write fast, powerful applications easily. Languages that deny them these basics will see an exodus of users and find themselves falling by the wayside into obscurity.



In this article I'm going to try to explain why clunky old lower-level languages like Java, and to a greater extent C, are still popular.

I think the biggest issue, and I know this is a sore point for higher-level programmers, is speed.

"But my application doesn't require speed", "I find it's IO bound", you say.

While your application does not require speed now, it may in future scenarios, and an algorithm written in a low-level language can be put to much more demanding tasks. Determining the winning poker hand may be cheap to compute; calculating the probability that a particular move wins is a much heavier computation.

You might want to run software on a machine with less powerful hardware than your own. There also seems to be a trend toward porting more sophisticated software to less powerful devices; smartphones are an example of this.

"I will just scale the hardware", you say. If you have the money, that sounds like a great solution, but what happens when the money runs out, as economics will ensure it does? And some algorithms, even on the most powerful hardware available, may not run fast enough for your needs.

"I will adapt my software when necessary". There are a few techniques for this. Advanced features of a language can be replaced with the more primitive operations the language provides. The slow parts of the code can be factored out and rewritten in a lower-level language, reached via a remote procedure call or a foreign function interface. Or a more sophisticated algorithm can be employed, using a special data structure or calculation technique.

All of these sound like good ideas, but whereas you had one problem before, you now have two, which proves the point I was trying to make. Sometimes, in a lower-level language, you can get away with brute-force algorithms alone; see some Project Euler solutions for examples of this.

Last post I said that high-level languages were more portable. While this is true of the source code, it is not always true of the application more generally.

Both developers and users are often left with the task of installing the latest Flash, JVM, .NET framework, or a particular browser in order to run a program written in a higher-level general-purpose language. Generally, the higher level the language, the more esoteric it will be and the less likely its dependencies will already be installed.

Building software, that is, the packaging and distribution of software, is in my opinion highly underrated, and it's for this reason, among others, that programs written against the Win32 API, libc, or Java have been so successful. Often, the easier it is to move an application to another machine, the worse it will perform; languages that run on virtual machines are a good example of this. The more high-level constructs a language supports, the slower it usually is. Ruby, though known for its flexibility, is also known for being particularly slow.

Code written in low-level languages is generally less esoteric and more commonly used, which brings another benefit: a more common language means more development resources. A counterexample might be CPAN, which currently has over 130,000 modules, but I wonder what portion of those packages are of good quality, given how many people are actually using them today.

If you go one step up to something like Java, you get a mass of very high-quality libraries such as SAX, Swing, and JDBC available to you.

The more esoteric higher-level languages are usually more fragmented in their software support as well. If you want to install a Python library, there are many variants for the different versions you might want. Search for Python in your package manager of choice, for example, and you may be confronted with the question of which version: 2.4, 2.5, 2.6, 2.7, 3, etc. Lower-level languages have these sorts of problems too, but to a far lesser degree than their higher-level counterparts.

I find high-level languages like PHP and Python usually do well, until the language fragments.

This creates a very interesting period I will call the "honeymoon" phase of a programming language: the feeling a programmer gets when using a language that has just come out, believing it will never change.

So there is my article on why high-level languages are not as practical as they might seem at first. Many of the things I've mentioned are circumstantial and not really to do with the languages fundamentally. Still, this has been a bit of a rude awakening for me, which I wanted to share. The whole idea summarizes itself in what I will call "The rule of high-level code":

"Any sufficiently practical application written in a high-level language will be rewritten in a low-level one."

The historic progression of programming languages.

It must feel comforting that we live in an age where we can program in the languages we do. A glance at the history of programming languages shows a nice trend: programming languages are getting "higher level".

It is then appealing to attribute this increase in sophistication to an increasingly educated society or to the coming of better language design.

One remarkable smoking gun is the invention of Lisp. Lisp is much higher level than most of the languages used today, and anyone who hasn't looked it up may be interested to know that Lisp was invented in 1958, 25 years before even C++. A glance at the TIOBE index reveals other facts: C is the most popular language. Yes, that's right, C. C from 1972, which is, interestingly, 14 years after Lisp. This raises the question: what is really spurring the increase in high-level languages?

My analysis has come to a conclusion: the driving force is portability.


This I will try to justify using the languages in use today, and why they came about when they did.

1. Binary opcodes. People coded in ones and zeros. There was a specific code for each instruction, say, move memory or add numbers. The problem was that these codes would not work between machines. Thus a new language was born so that code would require less rewriting to move between machines of a similar architecture: assembly.

2. Assembly. While most machines had similar instruction types, even a small difference in architecture, like the number of registers, could make porting a program cumbersome. A language was needed that didn't care how many registers you had, that didn't care how high level your opcodes were, and that professed a standard byte length (8 bits to a byte).

This language was called C.

Thus, an operating system could be created that would truly run on different architectures. You might think that, according to my theory, an assembly language could simply be restricted to very low specs, for example supporting only the most common operations like INC and DEC and only two registers. That solution, however, is in a different domain. Newer architectures were being released that provided increased hardware capacity, and people wanted to write software that would still run on these new platforms at a good speed. As you can see, I'm stretching the definition of portability to include hardware support. When you drop knowledge of the number of registers in exchange for greater hardware portability, it no longer matters which register you would have liked a value to go in. The increase in level comes directly from dropping hardware specifics, just as in the progression from binary to assembly.

3. C++. When programmers hear "C++", they usually think of the pillars of object-oriented design (ignore these terms if you don't know them already): encapsulation, polymorphism, inheritance, abstraction. I'm going to tell you to forget all that and focus on one thing. In C, memory is obtained via a library call, typically malloc, whose behaviour can differ between platforms. C++ introduces a new keyword, coincidentally for this article, called "new". Allocating memory is a very common operation in large-scale programming, and given a discrepancy in the library used to allocate it, you now have unportable source. Make allocation part of the language and you have increased portability. (Objective-C is Apple's variant of C along similar lines.)

4. Java. Java takes the memory issue in C++ one step further: you don't need to allocate or free memory yourself; a garbage collector does it for you. Another benefit is that Java ships with a standard runtime, the JRE, that is largely compatible across operating systems, and programs don't even require recompilation for different operating systems, let alone architectures. A Windows binary built from C++ won't work on Linux, as the two use different executable formats; see PE and ELF for more info. (C# is Windows-branded Java, if you want to know what that's about.)

5. Scripting languages. Shell, Perl, PHP, and JavaScript are all scripting languages, and by some definitions not even programming languages, which is what this article is about. They are also arguably less general purpose in their application.

Even so, I would still like to explain them. First, no memory management is required, but Java already has that. They also come with standard libraries, but that's not really a language issue, and again, Java has that too. They don't require compilation, so why not use a Java interpreter such as BeanShell? The real difference is that these languages don't require static types, and this is their abstraction.

Type information is less valuable to a language being interpreted line by line. The fact that the source is executed in its raw form makes it more portable in the loose sense that the developers themselves do not have to compile it.

Where does this leave the future of software, you might ask?

First, I'd like to digress and try to justify the reasoning behind my conclusion. It seems that if the world wants mountains moved, it requires human power, and the bottleneck for this human power is not application-level abstractions but platform-level abstractions.

Since the invention and proliferation of the web, we live in an unprecedented stage of humanity, where a person on almost any computer, be it Mac, Windows, or Linux, can write an application using things such as maths, text, and graphics (with more coming as the web progresses) and run it on almost any other PC, notebook, tablet, or handheld.

I think society's focus on the attractive goal of making software portable between minds is only now starting to come to fruition. Functional programming languages are allowing coders to create more modular code, which in turn is more portable within their applications and languages.

The language of the future may be something akin to the lambda calculus. There will be tools for converting this side-effect-free language to other styles and back, giving developers of different abilities the means to contribute, read, and share code. The notion of a language may even change, given the revealed isomorphisms between languages. If you want to read code as if it were written in Java, it might be as simple as changing the syntax style in your editor.

10 commandments of web dev.

Hear ye, hear ye. I haveth the 10 commandments of web development, which should under no circumstances be broken.

  1. Thou shalt not inline CSS.
  2. Thou shalt not inline JavaScript.
  3. Thou shalt place all CSS and JavaScript in their own respective files.
  4. Thou shalt reference these CSS and JavaScript files only in the <head> tag.
  5. Thou shalt not use tables where CSS divs could be used instead.
  6. Thou shalt not make use of negative margins.
  7. Thou shalt not make use of deprecated HTML attributes such as bgcolor or alink.
  8. Thou shalt not make use of global variables, other than to store the namespace for your codebase.
  9. Thou shalt not construct HTML in JavaScript using string manipulation.
  10. Thou shalt not encode data in anything other than XML.

Dear Google,

It has come to my attention that all of these rules have been broken. I strongly recommend you refactor all of your code to comply with these rules for a successful execution of your code and business goals.