The Artificial Intelligence revolution is taking a new turn. Rather than trying to get machines to emulate people, today’s AI researchers are devising a raft of technologies that are decidedly not modelled on human intelligence. Normally when we think about AI we picture robots designed to think, look and act like people. But much of today’s AI is being built for areas where computers are vastly superior to humans, leveraging the machines’ own competitive advantage. By using probability-based algorithms to derive meaning from huge amounts of data, researchers have discovered that they don’t have to teach a computer how to accomplish a task; they can simply show it what people did and let the machine work out how to emulate that behaviour under similar circumstances.

This shift is producing some striking new developments. Last spring, Dow Jones launched Lexicon, a new reporting service that sends real-time financial news to professional investors. There is nothing innovative in that by itself; Dow Jones Newswires has made its name by publishing the kind of news that moves the stock market.

The difference here is that most of the professional investors subscribing to Lexicon aren’t human—they’re algorithms, the lines of code that govern an increasing amount of global trading activity—and they don’t read news the way humans do. They don’t need their information delivered in the form of a story or even in sentences. They just want data—the hard, actionable information that those words represent.

You can read the latest about these fascinating developments below or in Wired Magazine.

Algorithms Take Control of Wall Street
By Felix Salmon and Jon Stokes – Wired, January 2011

Lexicon packages the news in a way that its robo-clients can understand. It scans every Dow Jones story in real time, looking for textual clues that might indicate how investors should feel about a stock. It then sends that information in machine-readable form to its algorithmic subscribers, which can parse it further, using the resulting data to inform their own investing decisions. Lexicon has helped automate the process of reading the news, drawing insight from it, and using that information to buy or sell a stock. The machines aren’t there just to crunch numbers anymore; they’re now making the decisions.
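To make the idea concrete, here is a minimal sketch of what “machine-readable news” can look like: scan a story for signal words and emit structured data rather than prose. The word lists, scoring scheme and output fields below are invented for illustration and are not Lexicon’s actual method.

    # Hypothetical sketch of turning a news story into machine-readable signals.
    # The word lists and scoring are illustrative only, not Dow Jones Lexicon's real model.
    POSITIVE = {"beat", "upgrade", "surge", "record", "growth"}
    NEGATIVE = {"miss", "downgrade", "plunge", "lawsuit", "recall"}

    def score_story(ticker, text):
        words = text.lower().split()
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        sentiment = (pos - neg) / max(pos + neg, 1)   # -1.0 (bearish) to +1.0 (bullish)
        # Emit structured fields a trading algorithm can parse directly, instead of prose.
        return {"ticker": ticker, "sentiment": sentiment, "positive_hits": pos, "negative_hits": neg}

    print(score_story("ACME", "ACME shares surge after record earnings beat estimates"))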
That increasingly describes the entire financial system. Over the past decade, algorithmic trading has overtaken the industry. From the single desk of a startup hedge fund to the gilded halls of Goldman Sachs, computer code is now responsible for most of the activity on Wall Street. (By some estimates, computer-aided high-frequency trading now accounts for about 70 percent of total trade volume.) Increasingly, the market’s ups and downs are determined not by traders competing to see who has the best information or sharpest business mind but by algorithms feverishly scanning for faint signals of potential profit.
Algorithms have become so ingrained in our financial system that the markets could not operate without them. At the most basic level, computers help prospective buyers and sellers of stocks find one another—without the bother of screaming middlemen or their commissions. High-frequency traders, sometimes called flash traders, buy and sell thousands of shares every second, executing deals so quickly, and on such a massive scale, that they can win or lose a fortune if the price of a stock fluctuates by even a few cents. Other algorithms are slower but more sophisticated, analyzing earning statements, stock performance, and newsfeeds to find attractive investments that others may have missed. The result is a system that is more efficient, faster, and smarter than any human.
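A toy illustration of that most basic function, matching buyers with sellers, might look like the sketch below; a real exchange’s matching engine handles time priority, order types and partial fills at enormous speed, and nothing here reflects any particular venue’s system.

    # A toy order-matching sketch: cross the highest bid with the lowest ask.
    # Real matching engines handle time priority, order types and partial fills
    # at enormous speed; this is illustrative only.
    bids = [(101.50, 200), (101.45, 500)]   # (price, shares), best bid first
    asks = [(101.48, 300), (101.55, 400)]   # (price, shares), best ask first

    def match(bids, asks):
        trades = []
        while bids and asks and bids[0][0] >= asks[0][0]:
            bid_price, bid_qty = bids[0]
            ask_price, ask_qty = asks[0]
            qty = min(bid_qty, ask_qty)
            trades.append((ask_price, qty))          # execute at the resting ask price
            bids[0] = (bid_price, bid_qty - qty)
            asks[0] = (ask_price, ask_qty - qty)
            if bids[0][1] == 0:
                bids.pop(0)
            if asks[0][1] == 0:
                asks.pop(0)
        return trades

    print(match(bids, asks))   # -> [(101.48, 200)]

The full system assembled from components like these is, of course, vastly faster and more sophisticated than this toy.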
It is also harder to understand, predict, and regulate. Algorithms, like most human traders, tend to follow a fairly simple set of rules. But they also respond instantly to ever-shifting market conditions, taking into account thousands or millions of data points every second. And each trade produces new data points, creating a kind of conversation in which machines respond in rapid-fire succession to one another’s actions. At its best, this system represents an efficient and intelligent capital allocation machine, a market ruled by precision and mathematics rather than emotion and fallible judgment.
But at its worst, it is an inscrutable and uncontrollable feedback loop. Individually, these algorithms may be easy to control, but when they interact they can create unexpected behaviors—a conversation that can overwhelm the system it was built to navigate. On May 6, 2010, the Dow Jones Industrial Average inexplicably experienced a series of drops that came to be known as the flash crash, at one point shedding some 573 points in five minutes. Less than five months later, Progress Energy, a North Carolina utility, watched helplessly as its share price fell 90 percent. Also in late September, Apple shares dropped nearly 4 percent in just 30 seconds, before recovering a few minutes later.
These sudden drops are now routine, and it’s often impossible to determine what caused them. But most observers pin the blame on the legions of powerful, superfast trading algorithms—simple instructions that interact to create a market that is incomprehensible to the human mind and impossible to predict.
For better or worse, the computers are now in control.
Ironically enough, the notion of using algorithms as trading tools was born as a way of empowering traders. Before the age of electronic trading, large institutional investors used their size and connections to wrangle better terms from the human middlemen that executed buy and sell orders. “We were not getting the same access to capital,” says Harold Bradley, former head of American Century Ventures, a division of a midsize Kansas City investment firm. “So I had to change the rules.”
Bradley was among the first traders to explore the power of algorithms in the late ’90s, creating approaches to investing that favored brains over access. It took him nearly three years to build his stock-scoring program. First he created a neural network, painstakingly training it to emulate his thinking—to recognize the combination of factors that his instincts and experience told him were indicative of a significant move in a stock’s price.
But Bradley didn’t just want to build a machine that would think the same way he did. He wanted his algorithmically derived system to look at stocks in a fundamentally different—and smarter—way than humans ever could. So in 2000, Bradley assembled a team of engineers to determine which characteristics were most predictive of a stock’s performance. They identified a number of variables—traditional measurements like earnings growth as well as more technical factors. Altogether, Bradley came up with seven key factors, including the judgment of his neural network, that he thought might be useful in predicting a portfolio’s performance.
He then tried to determine the proper weighting of each characteristic, using a publicly available program from UC Berkeley called the differential evolution optimizer. Bradley started with random weightings—perhaps earnings growth would be given twice the weight of revenue growth, for example. Then the program looked at the best-performing stocks at a given point in time. It then picked 10 of those stocks at random and looked at historical data to see how well the weights predicted their actual performance. Next the computer would go back and do the same thing all over again—with a slightly different starting date or a different starting group of stocks. For each weighting, the test would be run thousands of times to get a thorough sense of how those stocks performed. Then the weighting would be changed and the whole process would run all over again. Eventually, Bradley’s team collected performance data for thousands of weightings.
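As an illustration of that scoring step, the sketch below backtests a single random weighting against invented historical data. The factor names, the data and the top-decile scoring rule are all hypothetical and bear no relation to Bradley’s actual model.

    # Hypothetical sketch: score one candidate weighting against historical data.
    import random

    FACTORS = ["earnings_growth", "revenue_growth", "momentum", "neural_net_score"]

    # Invented history: for each stock, its factor values and the return that followed.
    history = [
        {"factors": {f: random.random() for f in FACTORS},
         "forward_return": random.gauss(0, 0.05)}
        for _ in range(500)
    ]

    def score_weighting(weights, sample):
        """Rank the sample by weighted factor score and average the top decile's returns."""
        ranked = sorted(sample,
                        key=lambda s: sum(weights[f] * s["factors"][f] for f in FACTORS),
                        reverse=True)
        top = ranked[: max(1, len(ranked) // 10)]
        return sum(s["forward_return"] for s in top) / len(top)

    weights = {f: random.random() for f in FACTORS}          # one random candidate weighting
    trials = [score_weighting(weights, random.sample(history, 50)) for _ in range(1000)]
    print(sum(trials) / len(trials))                          # this weighting's average score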
Once this process was complete, Bradley collected the 10 best-performing weightings and ran them once again through the differential evolution optimizer. The optimizer then mated those weightings—combining them to create 100 or so offspring weightings. Those weightings were tested, and the 10 best were mated again to produce another 100 third-generation offspring. (The program also introduced occasional mutations and randomness, on the off chance that one of them might produce an accidental genius.) After dozens of generations, Bradley’s team discovered ideal weightings. (In 2007, Bradley left to manage the Kauffman Foundation’s $1.8 billion investment fund and says he can no longer discuss his program’s performance.)
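The evolutionary refinement can likewise be sketched in a few lines: keep the best weightings, recombine them, and occasionally mutate one. This is a generic genetic-style recombination written for illustration, with a stand-in fitness function, not the UC Berkeley differential evolution optimizer itself.

    # Illustrative genetic-style refinement of factor weightings.
    import random

    FACTORS = ["earnings_growth", "revenue_growth", "momentum", "neural_net_score"]
    TARGET = {f: random.random() for f in FACTORS}   # hidden "ideal" weights, for demo only

    def fitness(weights):
        # Stand-in for the historical backtest described above: here, a weighting
        # scores higher the closer it is to the hidden target.
        return -sum((weights[f] - TARGET[f]) ** 2 for f in FACTORS)

    def mate(a, b):
        """Combine two parent weightings, with an occasional random mutation."""
        child = {f: random.choice([a[f], b[f]]) for f in FACTORS}
        if random.random() < 0.1:
            child[random.choice(FACTORS)] = random.random()
        return child

    population = [{f: random.random() for f in FACTORS} for _ in range(100)]
    for generation in range(50):
        best = sorted(population, key=fitness, reverse=True)[:10]        # keep the 10 best
        population = [mate(random.choice(best), random.choice(best)) for _ in range(100)]

    print(max(population, key=fitness))   # the best weighting after 50 generations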
Bradley’s effort was just the beginning. Before long, investors and portfolio managers began to tap the world’s premier math, science, and engineering schools for talent. These academics brought to trading desks sophisticated knowledge of AI methods from computer science and statistics.
And they started applying those methods to every aspect of the financial industry. Some built algorithms to perform the familiar function of discovering, buying, and selling individual stocks (a practice known as proprietary, or “prop,” trading). Others devised algorithms to help brokers execute large trades—massive buy or sell orders that take a while to go through and that become vulnerable to price manipulation if other traders sniff them out before they’re completed. These algorithms break up and optimize those orders to conceal them from the rest of the market. (This, confusingly enough, is known as algorithmic trading.) Still others are used to crack those codes, to discover the massive orders that other quants are trying to conceal. (This is called predatory trading.)
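The order-splitting idea, at its simplest, is just slicing a large parent order into smaller, irregular child orders over time. The sketch below shows that and nothing more; real execution algorithms randomise size and timing far more carefully and react to live market conditions.

    # Toy sketch of breaking a large parent order into smaller child orders
    # so it is harder for other traders to detect. Illustrative only.
    import random

    def slice_order(total_shares, n_slices=20, jitter=0.3):
        remaining = total_shares
        children = []
        for i in range(n_slices):
            base = remaining // (n_slices - i)
            size = max(1, int(base * random.uniform(1 - jitter, 1 + jitter)))
            size = min(size, remaining)
            children.append(size)
            remaining -= size
            if remaining == 0:
                break
        if remaining:
            children[-1] += remaining       # sweep any leftover into the final slice
        return children

    child_orders = slice_order(100_000)
    print(len(child_orders), sum(child_orders))   # sums back to the full 100,000 shares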
The result is a universe of competing lines of code, each of them trying to outsmart and one-up the other. “We often discuss it in terms of The Hunt for Red October, like submarine warfare,” says Dan Mathisson, head of Advanced Execution Services at Credit Suisse. “There are predatory traders out there that are constantly probing in the dark, trying to detect the presence of a big submarine coming through. And the job of the algorithmic trader is to make that submarine as stealth as possible.”
Meanwhile, these algorithms tend to see the market from a machine’s point of view, which can be very different from a human’s. Rather than focus on the behavior of individual stocks, for instance, many prop-trading algorithms look at the market as a vast weather system, with trends and movements that can be predicted and capitalized upon. These patterns may not be visible to humans, but computers, with their ability to analyze massive amounts of data at lightning speed, can sense them.
The partners at Voleon Capital Management, a three-year-old firm in Berkeley, California, take this approach. Voleon engages in statistical arbitrage, which involves sifting through enormous pools of data for patterns that can predict subtle movements across a whole class of related stocks.
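In its simplest textbook form, statistical arbitrage is pairs trading: watch the spread between two historically related stocks and bet on it reverting to its average. The sketch below illustrates only that textbook version, with invented prices; Voleon’s models, as its founders explain below, are far more complex and operate across thousands of stocks at once.

    # Textbook pairs-trading sketch: trade the spread between two related stocks
    # when it drifts far from its historical average. Purely illustrative.
    import statistics

    stock_a = [100.0, 100.5, 101.2, 100.8, 101.5, 102.0, 101.7, 103.9]
    stock_b = [ 50.0,  50.2,  50.7,  50.4,  50.9,  51.1,  51.0,  51.2]

    spread = [a - 2.0 * b for a, b in zip(stock_a, stock_b)]   # assume a rough 2:1 relationship
    mean, stdev = statistics.mean(spread[:-1]), statistics.stdev(spread[:-1])
    z = (spread[-1] - mean) / stdev

    if z > 2:       # spread unusually wide: short A, buy B, bet on reversion
        print(f"z={z:.2f}: short stock A, long stock B")
    elif z < -2:    # spread unusually narrow: buy A, short B
        print(f"z={z:.2f}: long stock A, short stock B")
    else:
        print(f"z={z:.2f}: no trade")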
Situated on the third floor of a run-down office building, Voleon could be any other Bay Area web startup. Geeks pad around the office in jeans and T-shirts, moving amid half-open boxes and scribbled whiteboards. Cofounder Jon McAuliffe is a stats wonk from Berkeley and Harvard University whose résumé includes a stint at Amazon.com working on the company’s recommendation engine. The other cofounder, CEO Michael Kharitonov, is a computer scientist from Berkeley and Stanford who formerly ran a networking startup.
To hear them describe it, their trading strategy bears more resemblance to those data-analysis projects than to classical investing. Indeed, McAuliffe and Kharitonov say that they don’t even know what their bots are looking for or how they reach their conclusions. “What we say is ‘Here’s a bunch of data. Extract the signal from the noise,’” Kharitonov says. “We don’t know what that signal is going to be like.”
“The kind of trading strategies our system uses are not the kind of strategies that humans use,” Kharitonov continues. “We’re not competing with humans, because when you’re trading thousands of stocks simultaneously, trying to capture very, very small changes, the human brain is just not good at that. We’re playing on a different field, trying to exploit effects that are too complex for the human brain. They require you to look at hundreds of thousands of things simultaneously and to be trading a little bit of each stock. Humans just can’t do that.”
In late September, the Commodity Futures Trading Commission and the Securities and Exchange Commission released a 104-page report on the May 6 flash crash. The culprit, the report determined, was a “large fundamental trader” that had used an algorithm to hedge its stock market position. The trade was executed in just 20 minutes—an extremely aggressive time frame, which triggered a market plunge as other algorithms reacted, first to the sale and then to one another’s behavior. The chaos produced seemingly nonsensical trades—shares of Accenture were sold for a penny, for instance, while shares of Apple were purchased for $100,000 each. (Both trades were subsequently canceled.) The activity briefly paralyzed the entire financial system.
The report offered some belated clarity about an event that for months had resisted easy interpretation. Legislators and regulators, spooked by behavior they couldn’t explain, much less predict or prevent, began taking a harder look at computer trading. In the wake of the flash crash, Mary Schapiro, chair of the Securities and Exchange Commission, publicly mused that humans may need to wrest some control back from the machines. “Automated trading systems will follow their coded logic regardless of outcome,” she told a congressional subcommittee, “while human involvement likely would have prevented these orders from executing at absurd prices.” Delaware senator Ted Kaufman sounded an even louder alarm in September, taking to the Senate floor to declare, “Whenever there is a lot of money surging into a risky area, where change in the market is dramatic, where there is no transparency and therefore no effective regulation, we have a prescription for disaster.”
In the months after the flash crash, the SEC announced a variety of measures to prevent anything like it from occurring again. In June, it imposed circuit breakers, rules that automatically halt trading if a stock’s price fluctuates by more than 10 percent in five minutes. (In September, the SEC’s Schapiro announced that the agency might tweak the circuit breakers to prevent unnecessary freezes.) The agency is considering requiring trading algorithms to include a governor, which limits the size and speed at which trades can be executed. And it has also proposed the creation of a so-called consolidated audit trail, a single database that would collect information on every trade and execution, and which would—in the words of an SEC press release—”help regulators keep pace with new technology and trading patterns in the markets.” Others have suggested implementing a transaction tax, which would impose a particular burden on massive, lightning-fast trades.
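The single-stock circuit breaker described above amounts to a rolling-window check. The sketch below mirrors the 10-percent-in-five-minutes figure from the text; the data handling and the decision to halt are illustrative only, not the SEC’s actual rule text.

    # Sketch of the single-stock circuit-breaker rule described above:
    # halt trading if the price moves more than 10% within any five-minute window.
    from collections import deque

    WINDOW_SECONDS = 5 * 60
    THRESHOLD = 0.10

    def should_halt(recent, now, price):
        """recent: deque of (timestamp, price) ticks; True if the move exceeds 10%."""
        recent.append((now, price))
        while recent and now - recent[0][0] > WINDOW_SECONDS:
            recent.popleft()                      # drop ticks older than five minutes
        oldest_price = recent[0][1]
        return abs(price - oldest_price) / oldest_price > THRESHOLD

    ticks = deque()
    print(should_halt(ticks, 0,   40.00))   # False
    print(should_halt(ticks, 120, 39.50))   # False, ~1% move
    print(should_halt(ticks, 240, 35.00))   # True, ~12.5% drop inside five minutes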
But these are not ways of controlling the algorithms—they are ways of slowing them down or stopping them for a few minutes. That’s a tacit admission that the system has outgrown the humans that created it. Today a single stock can receive 10,000 bids per second; that deluge of data overwhelms any attempt to create a simple cause-and-effect narrative. “Our financial markets have become a largely automated adaptive dynamical system, with feedback,” says Michael Kearns, a computer science professor at the University of Pennsylvania who has built algorithms for various Wall Street firms. “There’s no science I’m aware of that’s up to the task of understanding its potential implications.”
For individual investors, trading with algorithms has been a boon: Today, they can buy and sell stocks much faster, cheaper, and easier than ever before. But from a systemic perspective, the stock market risks spinning out of control. Even if each individual algorithm makes perfect sense, collectively they obey an emergent logic—artificial intelligence, but not artificial human intelligence. It is, simply, alien, operating at the natural scale of silicon, not neurons and synapses. We may be able to slow it down, but we can never contain, control, or comprehend it. It’s the machines’ market now; we just trade in it.

TomorrowToday Global