Programming: Abacus to Apple

Computer programming (often shortened to coding, programming, or scripting) is the process of designing, writing, testing, debugging, and maintaining the source code of computer programs. Source code is written in programming languages such as C++, C#, Smalltalk, Python, and Java. Programming’s true purpose is the creation of instructions that tell computers how to perform specific tasks and exhibit desired behaviors. Writing code requires expertise in many subjects, including knowledge of the problem domain, algorithms, and formal logic.

A debate has long raged over the extent to which writing programs is an art, an engineering discipline, or a craft. In reality, good programming is the measured application of all three, with the goal of producing an evolvable, efficient software solution. It is important to note that the criteria vary considerably when defining software as efficient or evolvable. The field differs from other technical professions in that programmers generally need no license or certification to call themselves programmers or software engineers. Because the discipline covers so many areas, which may or may not include critical applications, licensing remains an open question for the profession as a whole. The field is in most cases self-governed by the entities that require the programming, and in certain instances this can mean strict working environments. For example, military projects often mandate the Ada language (maintained commercially by vendors such as AdaCore) and require a security clearance for any programming work. The debate is still ongoing in the US, while in some other jurisdictions presenting oneself as a “professional software engineer” without a license is restricted by law.

Another debate in the field of programming focuses on programming languages: the extent to which the language used to write a program affects the form the final program takes. This debate parallels the one in linguistics and cognitive science surrounding the Sapir-Whorf Hypothesis, which holds that the nature of a spoken language influences the habitual thought of its speakers. In other words, different language patterns produce different patterns of thought. If true, the mechanisms of language condition the thoughts of its speakers, making it impossible to represent the world perfectly through language.

The History of Programming

Simple arithmetic was the pinnacle of human computing for millennia. The abacus was the only mechanical computing device for thousands of years of human history. It was invented sometime around 2500 BC and was not surpassed until the Antikythera Mechanism appeared around 100 BC. That device tracked lunar and solar cycles, in part so the Olympiad could be held on a consistent schedule across years. In 1206 AD, the Kurdish scientist Al-Jazari constructed automata that used pegs and cams to sequentially trigger levers. These levers in turn operated percussion instruments, causing a small drummer figure to play various rhythms and patterns.

In 1801, the Jacquard Loom used a series of cards with holes punched in them to represent the pattern to follow when weaving cloth. The machine’s groundbreaking feature was that it could produce different weaves simply by using different cards. In the 1830s, Charles Babbage designed an invention called the Analytical Engine, also controlled by punched cards. The very first computer program was written for this engine, by Ada Lovelace, to calculate a sequence of Bernoulli numbers. The Industrial Revolution accelerated the development of computing into what we know today: numerical calculation, predetermined operation and output, and conceptually easy-to-use instructions for organization and input were all products of this mechanical boom in human history.

In the 1880s, Herman Hollerith invented a process by which data was recorded on a medium that a machine could then read. All prior machine-readable media, such as punched cards, had carried instructions to drive machines rather than data for them to process. Hollerith’s punched cards were different: unit record machines (a keypunch, a sorter, and a tabulator) encoded information onto the cards and processed it. These inventions formed the basis of the entire data processing industry. In 1896, he founded a company that later became the core of IBM. A control panel added to Hollerith’s machine in 1906 allowed it to perform different tasks without being physically rebuilt. By the middle of the 20th century, several machines functioned as early computers, each with control panels that let it perform a sequence of operations. These were the first truly programmable machines.

The advent of von Neumann architecture allowed computer programs to be stored in computer memory. The earliest programs had to be crafted using the instructions, or elementary operations, of the particular machine, and these instructions often had to be written in binary notation. Different models of computer used different instructions to perform the same tasks. This remained the case until assembly languages were developed, which let programmers specify each instruction in text form, with abbreviations substituted for each operation code and addresses written in symbolic form. Assembly language is more convenient, less prone to human error, and faster to write than raw machine code, but because assembly languages are little more than alternative notations for machine languages, any two machines with different instruction sets also have different assembly languages.
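The idea that an assembly mnemonic is just a readable name for a numeric opcode can be sketched in a few lines. The following toy "assembler" uses an invented instruction set (the mnemonics and opcode values are illustrative, not those of any real machine):

```python
# Toy assembler: mnemonics are simply names for numeric machine opcodes.
# The instruction set below is invented for illustration only.
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03, "HALT": 0xFF}

def assemble(lines):
    """Translate 'MNEMONIC [operand]' lines into machine-code bytes."""
    program = []
    for line in lines:
        parts = line.split()
        program.append(OPCODES[parts[0]])      # opcode byte
        if len(parts) > 1:
            program.append(int(parts[1]))      # operand byte (an address)
    return bytes(program)

code = assemble(["LOAD 10", "ADD 11", "STORE 12", "HALT"])
print(code.hex())  # 010a020b030cff
```

Because the translation is little more than a table lookup, a machine with a different opcode table necessarily has a different assembly language, exactly as described above.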

The first high-level programming language, dubbed FORTRAN, was developed beginning in 1954. A high-level programming language is one that allows programs to be created using abstract instructions, letting programmers specify calculations by entering a formula directly into the code. The source code is converted into machine instructions by a program called a compiler, which translates the FORTRAN into machine language. In fact, the name FORTRAN stands for Formula Translation.
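What "entering a formula directly" means is easiest to see in code. The sketch below uses Python rather than FORTRAN, but the principle is the same: the programmer writes the mathematical formula itself, and the language implementation translates it into machine instructions:

```python
import math

# In a high-level language the programmer writes the formula directly;
# the compiler or interpreter handles the translation to machine code.
def quadratic_root(a, b, c):
    """One real root of a*x^2 + b*x + c = 0, written as the formula itself."""
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

print(quadratic_root(1, -5, 6))  # 3.0 (x^2 - 5x + 6 has roots 2 and 3)
```

In machine or assembly language, the same calculation would require a sequence of individual load, multiply, subtract, and divide instructions chosen for one particular processor.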

Many other languages have been developed since, including some, like COBOL, designed specifically for commercial programming. Yet most programs were still entered using punched cards or even paper tape. By the late 1960s, data storage devices and computer terminals had become inexpensive enough that programs could be typed directly into computers. Text editors were developed that allowed changes to be made to a program far more easily than with punched cards, where an error meant the card had to be destroyed and a new one created to replace it.

Through the years, computers have made exponential leaps in processing power. This has allowed the creation of new programming languages that are even further abstracted from the underlying hardware. These modern languages include C++, Python, Visual Basic, SQL, Perl, Ruby, C#, Haskell, PHP, Java, and Objective-C (along with markup languages such as HTML), and there are dozens more in use today. These high-level languages often come with greater overhead, but the increase in computer speed has made them far more usable and practical than in the past. Languages like these are typically easier to learn and allow the programmer to work more efficiently with less overall code. However, they remain impractical for some programs that require low-level hardware access or where maximum processing speed is vital.

Computer programming is a popular career today, particularly in the developed world. Because programmers there are expensive, some programming has been outsourced to countries where labor costs are lower. These low-cost alternatives have caused instability in the profession in developed countries, just as happened with manufacturing.

Modern Programming

The field of programming in the modern world is a complex and lucrative industry. Technological breakthroughs happen all the time, so a programmer must remain a student throughout their career to keep their skills at a high level. It takes serious dedication and energy to stay on top of an ever-changing field, but doing so is imperative if the programmer wants to stay relevant and educated on the latest languages and their uses.

The approach to development in the software industry is as varied as the languages used to write programs. Still, there are fundamental properties that every program must satisfy in order to be a successful venture. The following are among the most relevant:

  • Reliability: This indicates how often a program’s results are correct. Conceptual correctness of algorithms, logic errors, and programming mistakes can all affect this metric.
  • Usability: This refers to what is known as the “ergonomics” of a program: its overall ease of use for its intended purpose. Usability encompasses a wide range of textual, graphical, and hardware elements that improve the clarity, cohesiveness, and completeness of a program’s user interface.
  • Robustness: This is how well a program anticipates problems not due to programmer error. These issues can include incorrect or corrupt data, lack of necessary resources, and user error.
  • Maintainability: The ease with which a program can be modified by developers in order to improve or customize, fix bugs, or adapt it to new environments. This is where best practices during initial development make all the difference. The end user may never notice this property, but it can affect the fate of a program over the long term.
  • Portability: This is the range of hardware and operating system platforms on which the code can be interpreted and run. It depends greatly on the facilities provided by the platforms, including system resources, the expected behavior of the hardware, and the availability of platform-specific compilers for the language of the code.
  • Efficiency: The amount of system resources a program uses is known as its efficiency. The less resource usage the better is the rule in computer programming.
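Robustness in particular is easy to demonstrate concretely. The sketch below (a minimal illustration, with an invented function name) averages a stream of values while anticipating the kinds of problems the list describes: corrupt data and user error, rather than programmer error:

```python
def mean_of_numbers(raw_values):
    """Average the values that parse as numbers, skipping corrupt entries.

    A robust routine anticipates bad input (corrupt data, user error)
    instead of crashing on the first malformed value.
    """
    numbers = []
    for value in raw_values:
        try:
            numbers.append(float(value))
        except (TypeError, ValueError):
            continue  # skip corrupt or non-numeric entries
    if not numbers:
        raise ValueError("no usable numeric data in input")
    return sum(numbers) / len(numbers)

print(mean_of_numbers(["4", "oops", "6", None]))  # 5.0
```

A fragile version of the same routine would simply call `float()` on every value and crash on the first bad one; the robust version degrades gracefully and only fails when no usable data remains.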

Final Thoughts

The development of programming throughout human history has always been an exciting and rewarding pursuit. The rewards we as a species have reaped from this avenue of endeavor are exhilarating and life-changing. The concept, when realized fully in our time, is one of the most beautiful and meaningful creations of the human mind since the advent of language itself. The possibilities open to humanity thanks to this field are endless, and each new day brings another revelation to us all.

Lift Conversions Blog – What is Programming? by Spencer Wade