
Review of Frank Pasquale’s “A Rule of Persons, Not Machines: The Limits of Legal Automation” – Article by Adam Alonzi

Adam Alonzi


From the beginning of his new paper, “A Rule of Persons, Not Machines: The Limits of Legal Automation,” Frank Pasquale, author of The Black Box Society: The Secret Algorithms That Control Money and Information, contends that software, given its brittleness, is not designed to deal with the complexities of taking a case through court and establishing a verdict. As he understands it, an AI cannot deviate far from the rules laid down by its creator. This assumption, which is not quite right even at the present time, only slightly tinges an otherwise erudite, sincere, and balanced treatment of the topic. He does not show much faith in the use of past cases to create datasets for the next generation of paralegals, automated legal services, and, in the more distant future, lawyers and jurists.

Lawrence Zelenak has noted that when taxes were filed entirely on paper, provisions were kept limited to avoid imposing unreasonably irksome nuances on the average person. Tax-return software has eliminated this “complexity constraint.” He goes on to state that without it, the laws, and the software that interprets them, become a “black box” for those who must abide by them. William Gale has said taxes could easily be computed for “non-itemizers”: the government could use information it already has to present a “bill” to this class of taxpayers, saving time and money for all parties involved. However, simplification does not always align with everyone’s interests. TurboTax, whose business is built entirely on helping ordinary people navigate the labyrinth that is the American federal income tax, saw such measures as a threat to its business model and put together a grassroots campaign to fight them. More than just another example of a business protecting its interests, this is an ominous foreshadowing of an escalation scenario that will transpire in many areas if and when legal AI becomes sufficiently advanced.
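A minimal sketch of what such a pre-computed “bill” amounts to, with hypothetical figures (the deduction and brackets below are illustrative placeholders, not actual tax law):

```python
# Hypothetical standard deduction and (lower threshold, marginal rate) brackets.
STANDARD_DEDUCTION = 12_000
BRACKETS = [(0, 0.10), (9_525, 0.12), (38_700, 0.22)]

def government_bill(reported_wages: float, reported_withholding: float) -> float:
    """Compute a pre-filled tax bill from data the government already holds."""
    taxable = max(reported_wages - STANDARD_DEDUCTION, 0.0)
    uppers = [b[0] for b in BRACKETS[1:]] + [float("inf")]
    tax = sum((min(taxable, hi) - lo) * rate
              for (lo, rate), hi in zip(BRACKETS, uppers)
              if taxable > lo)
    return tax - reported_withholding  # positive: amount owed; negative: refund

print(government_bill(reported_wages=50_000, reported_withholding=4_000))  # 369.5
```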

Pasquale writes: “Technologists cannot assume that computational solutions to one problem will not affect the scope and nature of that problem. Instead, as technology enters fields, problems change, as various parties seek to either entrench or disrupt aspects of the present situation for their own advantage.”

What he is referring to here, in everything but name, is an arms race. The vastly superior computational powers of robot lawyers may make the already perverse incentive to write ever more Byzantine rules even more attractive to bureaucracies and lawyers. The concern is that the clauses and dependencies hidden within contracts will quickly explode in number, making them far too detailed even for professionals to make sense of in a reasonable amount of time. Because this sort of software may become a necessary accoutrement in most or all legal matters, the demand for it, or for professionals with access to it, will expand greatly at the expense of those who are unwilling or unable to adopt it. This, though Pasquale only hints at it, may lead to greater imbalances in socioeconomic power. On the other hand, he does not consider the possibility of bottom-up open-source (or state-led) efforts to create synthetic public defenders. While this may seem idealistic, it is fairly clear that the open-source model can compete with, and in some areas outperform, proprietary competitors.

It is not unlikely that, within subdomains of law, an array of arms races can and will arise between synthetic intelligences. If a robot lawyer knows its client is guilty, should it squeal? This will change the way jurisprudence works in many countries, but it would seem unwise to program any robot to knowingly lie about whether a crime, particularly a serious one, has been committed – including by omission. If it is fighting against a punishment it deems overly harsh for a given offense (trespassing to get a closer look at a rabid raccoon, say, or unintentional jaywalking), should it maintain its client’s innocence as a means to an end? A moral consequentialist, seeing that no harm was done (or, in some instances, could possibly have been done), may persist in pleading innocent. A synthetic lawyer may be more pragmatic than deontological, but it is not entirely correct, and certainly shortsighted, to (mis)characterize AI as only capable of blindly following a set of instructions, like a Fortran program made to compute the nth member of the Fibonacci sequence.
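The caricature at the end of that paragraph is worth making concrete. Here is the rigid rule-follower of the metaphor, rendered in Python rather than Fortran for brevity:

```python
def fibonacci(n: int) -> int:
    """Return the nth Fibonacci number (F(0) = 0, F(1) = 1) by rote iteration."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fibonacci(10))  # 55 -- the program executes its fixed rules and nothing more
```

A statistical learner trained on past cases sits at the other end of the spectrum: its behavior is shaped by data rather than hand-written rules, which is exactly why the blind-instruction-following characterization is shortsighted.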

Human courts are rife with biases: judges hand down more lenient decisions after taking a lunch break (they are roughly 65% more likely to grant parole – nothing to sneeze at), attractive defendants are viewed favorably by unwashed juries and trained jurists alike, and prejudices of all kinds against various “out” groups can tip the scales toward a guilty verdict or a harsher sentence. Why, then, would someone have an aversion to the introduction of AI into a system that is clearly ruled, in part, by the quirks of human psychology?

DoNotPay is an app that helps drivers fight parking tickets. It allows drivers with legitimate medical emergencies to gain exemptions. So, as Pasquale says, not only will traffic management be automated, but so will appeals. However, as he cautions, a flesh-and-blood lawyer takes responsibility for bad advice; DoNotPay not only fails to take responsibility, but “holds its client responsible for when its proprietor is harmed by the interaction.” There is little reason to think machines would do a worse job of adhering to privacy guidelines than human beings unless, as in the earlier example of a machine ratting on its client, some overriding principle compels them to divulge information to protect others from harm – if, say, a client’s diagnosis makes him a danger in his personal or professional life. Is the client responsible for the mistakes of the robot it has hired? Should the blame not fall upon the firm that provided the service?

Making a blockchain that could handle the demands of processing purchases and sales, one that takes into account all the relevant variables needed to make expert judgments on a matter, is no small task. As the infamous disagreement over the meaning of the word “chicken” in Frigaliment Importing Co. v. B.N.S. International Sales Corp. illustrates, even the definition of an everyday term can be puzzling. The need to maintain a decent reputation in order to maintain sales is a strong incentive against knowingly cheating customers, and although cheating tends to be the exception for this reason, it is still necessary to protect against it. As one official at the Commodity Futures Trading Commission put it, “where a smart contract’s conditions depend upon real-world data (e.g., the price of a commodity future at a given time), agreed-upon outside systems, called oracles, can be developed to monitor and verify prices, performance, or other real-world events.”
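A minimal sketch of the oracle pattern the official describes, with a hard-coded price feed standing in for the “agreed-upon outside system”; the names here are illustrative, not any real smart-contract API:

```python
from dataclasses import dataclass

@dataclass
class CommodityContract:
    commodity: str
    strike: float    # settlement price agreed upon in advance, $/lb
    notional: float  # quantity covered by the contract, lbs

def oracle_price(commodity: str) -> float:
    """Stand-in for an oracle that monitors and verifies real-world prices;
    a production system would query an agreed-upon external feed."""
    return {"chicken": 0.33}[commodity]  # illustrative spot price

def settle(contract: CommodityContract) -> float:
    """Payout to the buyer when the contract's condition is checked."""
    spot = oracle_price(contract.commodity)
    return (spot - contract.strike) * contract.notional

print(settle(CommodityContract("chicken", strike=0.30, notional=10_000)))  # 300.0
```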

Pasquale cites the SEC’s decision to force providers of asset-backed securities to file “downloadable source code in Python.” AmeriCredit responded by saying it “should not be forced to predict and therefore program every possible slight iteration of all waterfall payments” because its business is “automobile loans, not software development.” AmeriCredit does not seem to be familiar with machine learning. There is a case for making all financial transactions and agreements explicit on an immutable platform like a blockchain. There is also a case for making all such code open source, ready to be scrutinized by those with the talents to do so or, in the near future, by those with access to software that can quickly turn it into plain English, Spanish, Mandarin, Bantu, Etruscan, etc.
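To see what AmeriCredit was being asked to disclose, here is a toy version of waterfall logic in Python; the tranche names and amounts are hypothetical:

```python
def run_waterfall(collections: float,
                  tranches: list[tuple[str, float]]) -> dict[str, float]:
    """Distribute one period's loan collections to tranches in order of
    seniority: each senior tranche is paid in full before any junior one."""
    paid = {}
    for name, amount_due in tranches:
        payment = min(collections, amount_due)
        paid[name] = payment
        collections -= payment
    paid["residual"] = collections  # whatever remains flows to the equity holder
    return paid

# $8M collected against $10M of scheduled payments:
# the junior tranche absorbs the entire shortfall.
print(run_waterfall(8_000_000, [("senior_A", 6_000_000),
                                ("mezzanine_B", 3_000_000),
                                ("junior_C", 1_000_000)]))
```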

In the fallout from the 2008 crisis, some homeowners noticed that the entities on their foreclosure paperwork did not match the paperwork they had received when their mortgages were sold to a trust. According to Dayen (2010), many banks did not fill out the paperwork at all. This seems a rather forceful argument in favor of incorporating synthetic agents into law practices. Like many futurists, Pasquale foresees an increase in “complementary automation.” Humans cooperating with chess engines can still trounce the best standalone engines, a commonly cited example of how two (very different) heads are better than one. Yet going to a lawyer is not like visiting a tailor. People, including fairly delusional ones, know whether their clothes fit; they do not know whether they have received expert counsel – although the outcome of the case might give them a hint.

Pasquale concludes his paper by asserting that “the rule of law entails a system of social relationships and legitimate governance, not simply the transfer and evaluation of information about behavior.” This is closely related to the doubts expressed at the beginning of the piece about the usefulness of datasets in training legal AI. He then states that those in the legal profession must handle “intractable conflicts of values that repeatedly require thoughtful discretion and negotiation.” This appears to be the legal equivalent of epistemological mysterianism, and it stands on still shakier ground than its analogue, because laws are, or should be, rooted in some set of criteria agreed upon by the members of a given jurisdiction. Shouldn’t the rulings of lawmakers, and the values that inform them, be at least partially quantifiable? There are efforts, like EthicsNet, which are trying to prepare datasets and criteria to feed machines in the future (because they will certainly have to be fed by someone!). There is no doubt that the human touch in law will not be supplanted soon; the question is whether our intuition should be exalted as a guarantee of fairness or recognized as a hindrance to moving beyond a legal system bogged down by the baggage of human foibles.

Adam Alonzi is a writer, biotechnologist, documentary maker, futurist, inventor, programmer, and author of the novels A Plank in Reason and Praying for Death: A Zombie Apocalypse. He is an analyst for the Millennium Project, the Head Media Director for BioViva Sciences, and Editor-in-Chief of Radical Science News.

Casino Banking – Article by Gerald P. O’Driscoll, Jr.

Gerald P. O’Driscoll, Jr.
July 15, 2012
******************************

JPMorgan Chase & Co., one of the nation’s leading banks, revealed in May that a London trader racked up losses reportedly amounting to $2.3 billion over a 15-day period. The losses averaged over $150 million per day, sometimes hitting $200 million daily. The bank originally stated the trades were done to hedge possible losses on assets that might suffer due to Europe’s economic woes. There is now doubt whether it was a hedge or just a risky financial bet.

A hedge is a financial transaction designed to offset possible losses in an asset or good already owned. The classic hedge occurs when a farmer sells his crop in a futures market for delivery at a specified date after harvesting. He sells today what he will only produce tomorrow, and locks in the price. If the price at harvest time is lower than today’s price, he makes money on the forward contract, while losing a corresponding amount of money on the crops in the ground. In a perfect hedge the gains and losses should exactly offset each other.
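With made-up numbers, the offset in a perfect hedge looks like this:

```python
# Illustrative figures for the farmer's hedge: he sells futures today,
# at $5.00 per bushel, on a crop he will only harvest later.
futures_price = 5.00   # $/bushel, locked in today
harvest_price = 4.00   # $/bushel, spot price at harvest time
bushels = 10_000

gain_on_futures = (futures_price - harvest_price) * bushels  # +$10,000
loss_on_crop = (harvest_price - futures_price) * bushels     # -$10,000
print(gain_on_futures + loss_on_crop)  # 0.0 -- gains and losses exactly offset
```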

How did JPMorgan suffer such large losses on its hedges, and what are the lessons?

It appears the London trader entered into financial transactions on the basis of observed relationships among various bond indices. The market relationships broke down: the indices moved differently from what historical patterns or financial models predicted. Such breakdowns have been at the heart of a number of spectacular financial collapses, notably that of Long-Term Capital Management (LTCM) in 1998 and several others during the financial meltdown of 2007–08.

LTCM invested the money of rich clients in financial bets based on the expected relationships among the prices of various assets. According to Nicole Gelinas in After the Fall: Saving Capitalism from Wall Street—and Washington, at the time of its collapse LTCM had $2.3 billion of client money. By borrowing, it leveraged that investment 53 to 1. It then employed derivatives to magnify its bets still further, so that its total obligations reached a fantastic $1.25 trillion.
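The arithmetic behind those figures, rounded as in Gelinas’s account:

```python
client_capital = 2.3e9        # dollars of client money at the time of collapse
leverage = 53                 # borrowing multiplied that capital 53 to 1
positions = client_capital * leverage
print(f"${positions / 1e9:.0f} billion")        # ~$122 billion of leveraged positions

total_obligations = 1.25e12   # total obligations after derivatives magnified the bets
print(f"{total_obligations / positions:.1f}x")  # derivatives added roughly another 10x
```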

A derivative is any security whose price movements depend on (are derived from) movements in an underlying asset. “Puts” and “calls” on equity shares are relatively simple derivatives familiar to many. Asset prices, like various bonds, move in predictable ways with respect to each other, and values of derivatives linked to the assets similarly move in a predictable fashion with respect to the prices of the underlying assets—in normal times.
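For concreteness, here are the expiry payoffs of the two simple derivatives mentioned; the prices are illustrative:

```python
def call_payoff(spot: float, strike: float) -> float:
    """A call is the right to buy at the strike: it pays spot - strike
    when the underlying has risen above the strike, and nothing otherwise."""
    return max(spot - strike, 0.0)

def put_payoff(spot: float, strike: float) -> float:
    """A put is the right to sell at the strike: it pays strike - spot
    when the underlying has fallen below the strike, and nothing otherwise."""
    return max(strike - spot, 0.0)

print(call_payoff(spot=120.0, strike=100.0))  # 20.0: the call tracks the rally
print(put_payoff(spot=120.0, strike=100.0))   # 0.0: the put expires worthless
```

The derivative’s value is mechanically derived from the underlying, which is why the two move together predictably – in normal times.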

But the summer of 1998 was not a normal time. There was turmoil in Asian financial markets, then Russia threatened to default on its domestic debt. Global credit and liquidity dried up, and LTCM could not fund itself. It collapsed spectacularly.

A decade later there was turmoil in housing finance. The housing bubble was bursting. Mortgage lenders were under pressure, and some were failing. Many mortgages had been packaged together into mortgage-backed securities, which were sold to or guaranteed by Fannie Mae and Freddie Mac. Fannie and Freddie, nominally private entities but in reality guaranteed by the government, were failing. Lehman Brothers, an investment bank, was heavily involved in housing finance; it borrowed short-term, even overnight, to finance long-term holdings; it employed heavy leverage; and it made liberal use of derivatives contracts. It declared bankruptcy on September 15, 2008.

The specifics varied between 1998 and 2008, and between LTCM and Lehman. But the reliance on certain asset prices moving in predictable fashion was one shared element. So, too, was the heavy use of borrowed money (leverage) and the reliance on derivatives contracts. The volatility of complex derivatives contracts led legendary investor Warren Buffett to characterize them as “financial weapons of mass destruction.”

The Usual Suspects

In short, there is nothing new in what happened to JPMorgan. It claimed it was not trying to make risky financial bets, but to hedge risks already booked on its balance sheet. While details of the trades that led to losses are sketchy at this writing, they apparently employed both leverage and derivatives. As documented here, these are elements present in major financial blowups and collapses going back decades (and further). LTCM, Lehman, and Fannie and Freddie all thought they had at least some of their risks hedged. But hedges have a tendency to unravel just when needed most: in times of financial turmoil. Even so, financial institutions permit their traders to make the same kinds of dangerous bets over and over again. We used to have financial crises every decade or so; now the cycle seems to have been halved.

In the past I have dubbed today’s banking practice of placing dangerous financial bets “casino banking.” It differs little from the activities conducted at gaming tables in Las Vegas and has little or no reference to the fundamentally healthy activity of matching viable businesses with capital and credit.

In a Cato Policy Analysis, “Capital Inadequacies: The Dismal Failure of the Basel Regime of Bank Capital Regulation,” Kevin Dowd and three coauthors examined some of the technical problems with standard risk models used by large banks. It is an exhaustive analysis, and I commend it to those interested. The authors delve into many issues, but concentrate on the many flaws of the complex mathematical models used by banks to control risks.

In August 2007 Goldman Sachs Chief Financial Officer David Viniar puzzled over a series of “25-standard-deviation moves” in financial markets affecting Goldman. (Returns deviated from their expected values by 25 standard deviations, a measure of volatility.) Such moves should occur once every 10-to-the-137th-power years if the assumptions of the risk model were correct (a Gaussian, or “normal,” distribution of returns). As Dowd and his coauthors put it, “Such an event is about as likely as Hell freezing over. The occurrence of even a single such event is therefore conclusive proof that financial returns are not Gaussian—or even remotely so.” And yet there were several in a matter of days. In Dowd & Co.’s telling, the models lie, the banks swear to it, and the regulators pretend to believe them. All of this helps explain how the losses at Morgan might have happened: traders rely on flawed models to execute their trades.
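The order of magnitude is easy to verify. The sketch below treats each year as a single Gaussian draw, which appears to be how the quoted waiting time is framed:

```python
import math

# Probability that a Gaussian return lands 25 or more standard deviations
# above its mean: P(Z >= 25) = erfc(25 / sqrt(2)) / 2
p = math.erfc(25 / math.sqrt(2)) / 2
print(f"{p:.2e}")      # ~3.06e-138

# Expected wait for one such event, at one draw per year:
print(f"{1 / p:.1e}")  # ~3.3e137 years -- the 10-to-the-137th-power figure
```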

Now to the Lessons

Major financial institutions continue to take on large risks. Why? Assume the trades made by Morgan really were to hedge the bank’s exposure to events in Europe. That implies, of course, that risky investments had already been put in place (since they then needed to be hedged). Additionally, the risks were so complex that even a highly skilled staff (which Morgan certainly employs) could not successfully execute hedges on them.

Reports indicate that senior management and the board of directors were aware of the trades and exercising oversight. The fact that the losses were incurred anyway confirms what many of us have been arguing. Major financial institutions are at once very large and very complex. They are too large and too complex to manage. That is in part what beset Citigroup in the 2000s and now Morgan, which has until now been recognized as a well-managed institution.

If ordinary market forces were at work, these institutions would shrink to manageable sizes and levels of complexity. Ordinary market forces are not at work, however. Public policy rewards size (and the complexity that accompanies it). Major financial institutions know from experience that they will be bailed out when they incur losses that threaten their survival. Morgan’s losses do not appear to fall into that category, but they illustrate how bad incentives lead to bad outcomes.

Minding Our Business

Some commentators have argued that politicians and the public have no business concerning themselves with Morgan’s losses; only Morgan’s stockholders, who saw the share price drop more than 9 percent in one day, and the senior managers and traders who lost their jobs should have an interest. But in fact losses incurred at major financial institutions are the business of taxpayers, because government policy has made them their business.

Large financial institutions will continue taking on excessive risks so long as they know they can offload the losses onto taxpayers if needed. That is the policy summarized as “too big to fail.” Let us not forget the Troubled Asset Relief Program (TARP), signed into law by President George W. Bush in October 2008. It was a $700 billion boondoggle to transfer taxpayer money to stockholders and creditors of major banks—and to their senior management; don’t forget the bonuses paid out of the funds.

Banks may be too big and complex to close immediately, but no institution is too big to fail. Failure means the stockholders and possibly the bondholders are wiped out. Until that discipline, which once existed, is reintroduced, there will be more big financial bets going bad at these banks.

Changing the bailout policy will not be easy because of what is known as the time-inconsistency problem. Having bailed out so many companies so many times, the federal government cannot credibly commit in advance not to do so in the future. It can say no to future bailouts today, but people know that when financial collapse hits tomorrow, government will say yes once again. The promises made today will not match the government’s future actions. There is inconsistency between words and deeds across time.

What to do in the meantime? The Volcker Rule was a modest attempt to rein in risk-taking. Former Fed Chairman Paul Volcker wanted to stop banks from making risky trades on their own books (as opposed to executing trades for customers). Industry lobbying has hopelessly complicated the rule and delayed its issuance.

Morgan’s chief executive officer, James Dimon, asserted the London trades would not have violated the rule. If true, it suggests that an even stronger rule needs to be in place. Various suggestions have been made to address excessive risk-taking by financial firms backed by the taxpayers. It is time to take them more seriously.

Gerald O’Driscoll is a senior fellow at the Cato Institute. With Mario J. Rizzo, he coauthored The Economics of Time and Ignorance.

This article was published by The Foundation for Economic Education and may be freely distributed, subject to a Creative Commons Attribution United States License, which requires that credit be given to the author.