There is absolutely no doubt that our esteemed editor is well and truly hooked – even more so than by his passion for that well-known amber liquid he savours on the odd occasion that making a publication deadline earns him a visit to his local without having to sneak out.
For proof of this new addiction, look no further than the fact that this month’s cover story is the eighth time in a year that he has selected Artificial Intelligence (AI) for special coverage in this venerated publication.
Given that he appears to have awarded AI some sort of equivalent of an Oscar for the role the technology is going to play in our lives, on this occasion the cover story promises readers a “deep dive” into how AI, robotics and automation will change the world as they become embedded in a rapidly increasing number of practical applications.
And now for the “deep drive” (as the Chunderer prefers to call it).
To understand AI, let’s take a really deep drive back to the very beginning. The phrase was coined way back in 1955, to describe a particular aspect of computerised technology (which is all it is and ever will be), by John McCarthy, a professor at Dartmouth College, a private Ivy League research university in the US that was established in 1769.
McCarthy, along with three colleagues, proposed a research workshop to explore how machines could simulate human intelligence, on the premise that every aspect of learning could be described so precisely that a machine could be made to simulate it.
The workshop took place the following year, in the summer of 1956, and is widely regarded as the official birth of AI: a gathering of pioneering scientists who explored and discussed how machines could use language, form abstractions, and solve problems typically reserved for humans.
While no singular “thinking machine” emerged from the event, it laid the foundation for the AI research and development that followed and that we know today.
And now, without inviting any form of controversy about who can claim and justify “industry leadership” and the ability to deliver a “quantum leap” based on the supply of “cutting edge solutions” – and any other claims couched in the marketing jargon the IT industry is so famous for – let’s look back at some of the technology milestones and at where AI began to surface as a realistic foundation for some really smart applications.
To simplify this look back, there is no doubt whatsoever that one company provides one of the most inspiring track records of sustained achievement at the forefront of the industry.
And that company is IBM – one of the oldest commercial computing companies still in operation today. Founded in 1911 as the Computing-Tabulating-Recording Company (CTR), it adopted International Business Machines (IBM) as its official name in 1924.
In the decades since its establishment as IBM, the company has pioneered many major technological advancements. These include punched-card tabulating machines, which handled business and government data processing until the advent of electronic computers in the 1940s and 1950s.
IBM competed with UNIVAC through the 1950s and into the 1960s, until the 1964 launch of the revolutionary IBM System/360 family of mainframe computers, which standardised computing across virtually all commercial applications.
Then in 1981 IBM introduced the PC (Personal Computer) – an event that in many ways can be compared to what is happening in the so-called automation space today, given the leap in personal productivity it delivered over the “dumb terminals” of the mainframe era.
During the 1990s IBM shifted its focus to software and services, and later to cloud computing and artificial intelligence.
The success of this shift and its breakthroughs in the field of software-based learning and AI were illustrated when Deep Blue, a chess-playing supercomputer, famously defeated Garry Kasparov, the reigning world chess champion, in a six-game match in May 1997.
This victory is regarded as a historic moment in artificial intelligence as it marked the first time a computer defeated a human world champion under standard tournament conditions.
This was followed in February 2011, when Watson, a supercomputer designed for natural language processing and data analysis, gained worldwide fame by defeating the Jeopardy! TV quiz show’s two foremost all-time champions, Brad Rutter and Ken Jennings.
And there you have it – a deep dive into the history of computing that proves that AI and all the other “cutting edge” technology that gets my esteemed editor’s blood racing is nothing more than a continuous evolution of man’s ability to do magic when he puts his mind to it.