Most people, if they think about the topic at all, probably imagine computer science involves the programming of computers. But what are computers? In most cases, these are just machines of one form or another. And what is programming? Well, it is the issuing of instructions (“commands” in the jargon of programming) for the machine to do something or other, or to achieve some state or other. Thus, we can view Computer Science as nothing more or less than the science of delegation.
When delegating a task to another person, we are likely to be more effective (as the delegator or commander) the more we know about the skills and capabilities and current commitments and attitudes of that person (the delegatee or commandee). So too with delegating to machines. Accordingly, a large part of theoretical computer science is concerned with exploring the properties of machines, or rather, the deductive properties of mathematical models of machines. Other parts of the discipline concern the properties of languages for commanding machines, including their meaning (their semantics) – this is programming language theory.
Because the vast majority of lines of program code nowadays are written by teams of programmers, not individuals, much of computer science – part of the branch known as software engineering – is concerned with how best to organize, manage, and evaluate the work of teams of people. Because most machines are controlled by humans and act in concert for or with or to humans, another, related branch of this science of delegation deals with the study of human-machine interactions. In both these branches, computer science reveals itself to have a side which connects directly with the human and social sciences, something not true of the other sciences often grouped with Computer Science: pure mathematics, physics, or chemistry. With the rise of networked machines, we may find ourselves delegating tasks not simply to one machine, but to multiple machines acting in concert or in parallel in some way. The branch of computer science known as distributed computing thus deals with delegation to, and co-ordination of, multiple machines. As a consequence, computer scientists think more about combinations of actions and about concurrency than do researchers in any other discipline. This is exactly as we would expect for a science of delegation.
And from its modern beginnings 70 years ago, computer science has been concerned with trying to automate whatever can be automated – in other words, with delegating the task of delegating. This is the branch known as Artificial Intelligence. We have intelligent machines which can command other machines, and manage and control them in the same way that humans could. But not all bilateral relationships between machines are those of commander-and-subordinate. More often in distributed networks, machines are peers of one another, intelligent and autonomous (to varying degrees). Thus, commanding is useless – persuasion is what is needed for one intelligent machine to ensure that another machine does what the first desires, just as with human beings. And so, as one would expect in a science of delegation, computational argumentation arises as an important area of study.
Great progress is being made on our EPSRC inter-disciplinary seed-funding research project, Bridging the Gaps, which aims to strengthen the many connections between Informatics and other disciplines at King’s College London. More information here. Please contact us if you want to be involved.
The Journal of the Royal United Services Institute (RUSI) has just published our paper on cyber-weapons, where we seek to classify the various types of cyber attack software according to the degree of intelligence and autonomy they exhibit. The paper is available here, and is already the most highly downloaded paper on the publisher’s web-site for the journal, here.
T. Rid and P. McBurney : Cyber weapons. Journal of the Royal United Services Institute (RUSI), 157 (1), February 2012.
The Financial Times recently published a correspondence on Reverse Polish Notation, including this letter from one Peter Jaeger of Tokyo, Japan (published on 2011-09-30):
“Sir, Your reader Chris Ludlam describes the input method of his HP12c as “reverse logic”. The correct term is “Reverse Polish”, which is not only far more colourful, but also gives credit to Jan Lukasiewicz, the logician who invented the original Polish Notation which American mathematicians later adapted for computers.”
While it is correct to say that American mathematicians adapted Reverse Polish Notation (RPN) for computers, this is not the whole story. The first person to speak publicly about using RPN for computer architectures was an Australian: Charles Hamblin, a philosopher and computer pioneer, speaking at a computer conference held in Salisbury, South Australia, in June 1957. (This was billed as “The First Australian Computer Conference”, but an earlier one had been held in 1951.) Hamblin’s work was published in the conference proceedings and later in a refereed article in the Computer Journal in 1962. Among the attendees at that conference was the British computer pioneer Maurice Wilkes, who later won the ACM A.M. Turing Award (in 1967), as well as delegates from computer manufacturing companies.
The first computer manufacturing company to announce deployment of RPN in a commercial computer architecture was British – the English Electric Company (EEC), in their KDF9 machine, announced in 1960 and delivered in 1963. Burroughs, an American computer company, also delivered a computer using RPN in 1963, the Burroughs B5000, but that machine was not announced until 1961. Robert Barton, chief architect of the B5000, later wrote that he had developed RPN independently of Hamblin, sometime in 1958.
So the first person to talk publicly about applying RPN to computers was Australian and the first computer company to say publicly they would actually do so was British. Not everything in computing happens first in the USA!
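The appeal of RPN for machine architectures such as the KDF9 and B5000 is easy to demonstrate: an expression in RPN can be evaluated left-to-right with nothing more than a pushdown stack, and needs no parentheses or precedence rules. A minimal sketch (the token set and operator choice here are illustrative, not drawn from any particular machine):

```python
def eval_rpn(tokens):
    """Evaluate a Reverse Polish Notation expression using a stack."""
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()   # right operand is on top of the stack
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

# The infix expression (3 + 4) * 5 becomes "3 4 + 5 *" in RPN --
# no parentheses needed.
print(eval_rpn("3 4 + 5 *".split()))   # 35.0
```

This one-pass, stack-only discipline is exactly what made RPN attractive as a hardware instruction format.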
R. S. Barton : Ideas for computer systems organization: a personal survey. pp. 7-16 of: J. T. Tou (Editor): Software Engineering: Volume 1: Proceedings of the Third Symposium on Computer and Information Sciences held in Miami Beach, Florida, December 1969. New York, NY, USA: Academic Press.
C. L. Hamblin : An addressless coding scheme based on mathematical notation. Proceedings of the First Australian Conference on Computing and Data Processing, Salisbury, South Australia: Weapons Research Establishment, June 1957.
C. L. Hamblin : Translation to and from Polish notation. Computer Journal, 5: 210-213, 1962.
What happens when automated algorithms interact in ways unforeseen by their creators? One example involved two online book-pricing algorithms, each automatically adjusting its price in response to the other’s adjustments, in an infinitely-ascending dance of the algorithms:
A few weeks ago a postdoc in my lab logged on to Amazon to buy the lab an extra copy of Peter Lawrence’s The Making of a Fly – a classic work in developmental biology that we – and most other Drosophila developmental biologists – consult regularly. The book, published in 1992, is out of print. But Amazon listed 17 copies for sale: 15 used from $35.54, and 2 new from $1,730,045.91 (+$3.99 shipping).
Even rather simple adaptive systems can exhibit unintended global behaviours. Full story here. (HT: ER)
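The mechanism behind such a runaway is simple to sketch. Suppose one seller always undercuts the other slightly, while the second always prices above the first by a larger margin (the multipliers below are illustrative assumptions, not the actual values used by either seller). Whenever the product of the two multipliers exceeds 1, each round of mutual re-pricing pushes both prices up exponentially:

```python
# Two hypothetical re-pricing bots: seller A undercuts B slightly,
# seller B marks up above A. Since 0.9983 * 1.2706 > 1, the pair of
# prices grows without bound as the bots react to each other.

def simulate(price_a, price_b, undercut=0.9983, markup=1.2706, rounds=25):
    for _ in range(rounds):
        price_a = undercut * price_b   # A re-prices just below B
        price_b = markup * price_a     # B re-prices above A
    return price_a, price_b

a, b = simulate(35.54, 35.54)
print(f"after 25 rounds: A = ${a:,.2f}, B = ${b:,.2f}")
```

Neither bot is misbehaving by its own lights; the divergence is a property of the coupled system, which is the point of the anecdote.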
The Department of Informatics at King’s College London currently has a number of academic vacancies, in the Agents and Intelligent Systems (AIS) Group and in the Software Modeling and Applied Logic (SMAL) Group. The vacancies are for people at all academic levels – Lecturers (Assistant Professors), Senior Lecturers (Associate Professors), and full Professors.
Further details are available at these sites:
Lectureships in SMAL.
The closing date is 20 May 2011.
A search algorithm is a computational procedure (an algorithm) for finding a particular object or objects in a larger collection of objects. Typically, these algorithms search for objects with desired properties whose identities are otherwise not yet known. Search algorithms (and search generally) have been an integral part of artificial intelligence and computer science for the last half-century, since the first working AI program, designed to play checkers, was written in 1951-2 by Christopher Strachey. At each round, that program evaluated the alternative board positions that resulted from potential next moves, thereby searching for the “best” next move for that round.
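That one-ply style of search – generate each legal move, score the position it leads to, pick the move with the best score – can be sketched abstractly. Everything here (the toy game, its moves, and the scoring heuristic) is a hypothetical stand-in, not Strachey's actual program:

```python
# One-ply search: choose the legal move whose successor position
# scores highest under an evaluation heuristic.

def best_move(position, legal_moves, apply_move, evaluate):
    """Return the legal move leading to the best-evaluated successor."""
    return max(legal_moves(position),
               key=lambda m: evaluate(apply_move(position, m)))

# Toy stand-in game: "positions" are integers, moves add an offset,
# and the evaluation prefers positions close to 10.
moves = lambda pos: [-1, +2, +5]
apply_ = lambda pos, m: pos + m
score = lambda pos: -abs(pos - 10)

print(best_move(6, moves, apply_, score))   # 5, since 6 + 5 is closest to 10
```

Deeper game-tree methods (minimax and its refinements) extend this same idea by evaluating positions several moves ahead.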
The first search algorithm in modern times dates from 1895: a depth-first search algorithm to solve a maze, due to the amateur French mathematician Gaston Tarry (1843-1913). Now, in a recent paper by the logician Wilfrid Hodges, the date for the first search algorithm has been pushed back much further: to the third decade of the second millennium, the 1020s. Hodges translates and analyzes a logic text of the Persian Islamic philosopher and mathematician Ibn Sina (aka Avicenna, c. 980 – 1037) on methods for finding a proof of a syllogistic claim when some premises of the syllogism are missing. Representation of domain knowledge using formal logic and automated reasoning over these logical representations (i.e., logic programming) has become a key way in which intelligence is inserted into modern machines; searching for proofs of claims (“potential theorems”) is how such intelligent machines determine what they know or can deduce. It is nice to think that theorem-proving is almost 1000 years old.
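A depth-first maze search in the spirit of Tarry's algorithm is easy to state: from each cell, advance into an unvisited open neighbour; on reaching a dead end, retreat the way you came and try another passage. The grid encoding below (0 = open, 1 = wall) is an illustrative choice, and this is a path-finding sketch rather than a reconstruction of Tarry's own procedure:

```python
# Depth-first search of a grid maze: advance into unvisited open cells,
# backtrack from dead ends, stop when the goal is reached.

def solve_maze(grid, start, goal):
    """Return a path from start to goal as a list of (row, col) cells."""
    rows, cols = len(grid), len(grid[0])
    visited, path = set(), []

    def dfs(cell):
        r, c = cell
        if cell in visited or not (0 <= r < rows and 0 <= c < cols):
            return False
        if grid[r][c] == 1:          # wall
            return False
        visited.add(cell)
        path.append(cell)
        if cell == goal:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if dfs((r + dr, c + dc)):
                return True
        path.pop()                   # dead end: backtrack
        return False

    return path if dfs(start) else None

maze = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(solve_maze(maze, (0, 0), (0, 2)))
```

The same generate-explore-backtrack skeleton underlies proof search: replace maze cells with proof states and moves with inference steps, and depth-first search becomes a theorem prover.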
B. Jack Copeland : What is Artificial Intelligence?
Wilfrid Hodges : Ibn Sina on analysis: 1. Proof search. or: abstract state machines as a tool for history of logic. pp. 354-404, in: A. Blass, N. Dershowitz and W. Reisig (Editors): Fields of Logic and Computation. Lecture Notes in Computer Science, volume 6300. Berlin, Germany: Springer. A version of the paper is available from Hodges’ website, here.
Gaston Tarry : Le problème des labyrinthes. Nouvelles Annales de Mathématiques, 14: 187-190, 1895.