Browsed by Tag: complexity

Review of Frank Pasquale’s “A Rule of Persons, Not Machines: The Limits of Legal Automation” – Article by Adam Alonzi

Adam Alonzi


From the beginning of his new paper “A Rule of Persons, Not Machines: The Limits of Legal Automation,” Frank Pasquale, author of The Black Box Society: The Secret Algorithms That Control Money and Information, contends that software, given its brittleness, is not suited to the complexities of taking a case through court and establishing a verdict. As he understands it, an AI cannot deviate far from the rules laid down by its creator. This assumption, which is not quite right even at the present time, only slightly tinges an otherwise erudite, sincere, and balanced treatment of the topic. He does not show much faith in the use of past cases to create datasets for the next generation of paralegals, automated legal services, and, in the more distant future, lawyers and jurists.

Lawrence Zelenak has noted that when taxes were filed entirely on paper, provisions were kept limited to avoid imposing unreasonably irksome nuances on the average person. Tax-return software has eliminated this “complexity constraint.” He goes on to state that without it the laws, and the software that interprets them, become a “black box” for those who must abide by them. William Gale has said taxes could be easily computed for “non-itemizers.” In other words, the government could use information it already has to present a “bill” to this class of taxpayers, saving time and money for all parties involved. However, simplification does not always align with everyone’s interests. TurboTax, whose business is built entirely on helping ordinary people navigate the labyrinth that is the American federal income tax, saw such measures as a threat to its business model and put together a grassroots campaign to fight them. More than just another example of a business protecting its interests, this is an ominous foreshadowing of an escalation scenario that will transpire in many areas if and when legal AI becomes sufficiently advanced.
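To make Gale’s point concrete, a return-free computation for a non-itemizer needs little more than the wage and withholding figures the government already receives from employers. The short Python sketch below is a deliberately simplified illustration of that idea; the bracket thresholds, rates, and standard deduction are invented placeholders, not real tax tables.

# Hypothetical sketch of return-free filing for a non-itemizer.
# Bracket figures and the standard deduction are placeholders, not real tax law.
BRACKETS = [(0, 0.10), (11_000, 0.12), (44_725, 0.22)]  # (lower bound, marginal rate)
STANDARD_DEDUCTION = 13_850

def balance_due(wages_reported: float, withholding_reported: float) -> float:
    """Return the balance due (positive) or refund (negative), computed solely
    from data the government already holds: reported wages and withholding."""
    taxable = max(wages_reported - STANDARD_DEDUCTION, 0)
    tax = 0.0
    for i, (lower, rate) in enumerate(BRACKETS):
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        tax += rate * max(min(taxable, upper) - lower, 0)
    return tax - withholding_reported

print(balance_due(wages_reported=60_000, withholding_reported=5_200))  # the "bill" presented to the taxpayer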

Pasquale writes: “Technologists cannot assume that computational solutions to one problem will not affect the scope and nature of that problem. Instead, as technology enters fields, problems change, as various parties seek to either entrench or disrupt aspects of the present situation for their own advantage.”

What he is referring to here, in everything but name, is an arms race. The vastly superior computational powers of robot lawyers may make the already perverse incentive to write ever more Byzantine rules still more attractive to bureaucracies and lawyers. The concern is that the clauses and dependencies hidden within contracts will quickly explode, making them far too detailed even for professionals to make sense of in a reasonable amount of time. Because this sort of software may become a necessary accoutrement in most or all legal matters, the demand for it, or for professionals with access to it, will expand greatly at the expense of those who are unwilling or unable to adopt it. This, though Pasquale only hints at it, may lead to greater imbalances in socioeconomic power. On the other hand, he does not consider the possibility of bottom-up open-source (or state-led) efforts to create synthetic public defenders. While this may seem idealistic, it is fairly clear that the open-source model can compete with, and in some areas outperform, proprietary competitors.

It is not unlikely that, within subdomains of law, an array of arms races can and will arise between synthetic intelligences. If a robot lawyer knows its client is guilty, should it squeal? The answer will change the way jurisprudence works in many countries, but it would seem unwise to program any robot to knowingly lie about whether a crime, particularly a serious one, has been committed – including by omission. If it is fighting a punishment it deems overly harsh for a given offense, say trespassing to get a closer look at a rabid raccoon or unintentional jaywalking, should it maintain its client’s innocence as a means to an end? A moral consequentialist, seeing that no harm was done (or, in some instances, could possibly have been done), may persist in pleading innocent. A synthetic lawyer may be more pragmatic than deontological, but it is not entirely correct, and certainly shortsighted, to (mis)characterize AI as only capable of blindly following a set of instructions, like a Fortran program made to compute the nth member of the Fibonacci series.

Human courts are rife with biases: judges grant parole far more often right after taking a lunch break (65% more likely, which is nothing to sneeze at), attractive defendants are viewed favorably by unwashed juries and trained jurists alike, and prejudices of all kinds against various “out” groups can tip the scales toward a guilty verdict or a harsher sentence. Why, then, would someone have an aversion to the introduction of AI into a system that is clearly ruled, in part, by the quirks of human psychology?

DoNotPay is an app that helps drivers fight parking tickets. It allows drivers with legitimate medical emergencies to gain exemptions. So, as Pasquale says, not only will traffic management be automated, but so will appeals. However, as he cautions, a flesh-and-blood lawyer takes responsibility for bad advice. DoNotPay not only fails to take responsibility, but “holds its client responsible for when its proprietor is harmed by the interaction.” There is little reason to think machines would do a worse job of adhering to privacy guidelines than human beings unless, as in the earlier example of a machine ratting on its client, some overriding principle compels them to divulge information in order to protect others from harm, for instance if a client’s diagnosis makes him a danger in his personal or professional life. Is the client responsible for the mistakes of the robot it has hired? Should the blame not fall upon the firm that has provided the service?

Making a blockchain that could handle the demands of processing purchases and sales, one that takes into account all the relevant variables needed to make expert judgments on a matter, is no small task. As the infamous disagreement over the meaning of the word “chicken” in Frigaliment Importing Co. v. B.N.S. International Sales Corp. illustrates, pinning down the definition of even an everyday term can be surprisingly difficult. The need to maintain a decent reputation in order to keep making sales is a strong incentive against knowingly cheating customers, and although cheating tends to be the exception for this reason, it is still necessary to protect against it. As one official at the Commodity Futures Trading Commission put it, “where a smart contract’s conditions depend upon real-world data (e.g., the price of a commodity future at a given time), agreed-upon outside systems, called oracles, can be developed to monitor and verify prices, performance, or other real-world events.”
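To make the official’s description concrete, here is a minimal sketch, in Python and with entirely hypothetical names, of a contract condition that defers to an agreed-upon oracle for the real-world price it cannot observe on its own:

# Minimal sketch of an oracle-fed settlement condition.
# "PriceOracle" and its data feed are hypothetical stand-ins for whatever
# agreed-upon outside system the contracting parties designate.
class PriceOracle:
    def __init__(self, observed_prices):
        # e.g. signed price reports published by an exchange
        self.observed_prices = observed_prices

    def price_at(self, date):
        return self.observed_prices[date]

def settle_futures_contract(oracle, expiry, strike, quantity):
    """Pay the difference between the oracle-reported price and the strike.
    The contract never guesses at the real-world price; it only trusts
    the oracle the parties agreed on in advance."""
    settlement_price = oracle.price_at(expiry)
    return (settlement_price - strike) * quantity

oracle = PriceOracle({"2024-06-30": 104.25})
print(settle_futures_contract(oracle, "2024-06-30", strike=100.0, quantity=10))  # 42.5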

Pasquale cites the SEC’s decision to force providers of asset-backed securities to file “downloadable source code in Python.” AmeriCredit responded by saying it “should not be forced to predict and therefore program every possible slight iteration of all waterfall payments” because its business is “automobile loans, not software development.” AmeriCredit does not seem to be familiar with machine learning. There is a case for making all financial transactions and agreements explicit on an immutable platform like blockchain. There is also a case for making all such code open source, ready to be scrutinized by those with the talents to do so or, in the near future, by those with access to software that can quickly turn it into plain English, Spanish, Mandarin, Bantu, Etruscan, etc.
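For readers unfamiliar with the term, a payment “waterfall” simply allocates incoming cash to tranches in order of seniority until the cash runs out, which is why the SEC thought it could reasonably be expressed as executable code. A minimal sketch follows; the tranche names and balances are invented for illustration:

# Hypothetical sketch of a payment waterfall: cash is paid to tranches in
# order of seniority until it is exhausted. Names and balances are invented.
def run_waterfall(cash, tranches):
    """tranches: list of (name, amount_owed) ordered from most to least senior.
    Returns a dict mapping each tranche to the payment it receives."""
    payments = {}
    for name, owed in tranches:
        paid = min(cash, owed)
        payments[name] = paid
        cash -= paid
    return payments

tranches = [("Class A notes", 500_000), ("Class B notes", 300_000), ("Equity", 200_000)]
print(run_waterfall(cash=650_000, tranches=tranches))
# {'Class A notes': 500000, 'Class B notes': 150000, 'Equity': 0}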

In the fallout from the 2008 crisis, some homeowners noticed that the entities on their foreclosure paperwork did not match the paperwork they received when their mortgages were sold to a trust. According to Dayen (2010), many banks did not fill out the paperwork at all. This seems to be a rather forceful argument in favor of the incorporation of synthetic agents into law practices. Like many futurists, Pasquale foresees an increase in “complementary automation.” Human-computer chess teams can still trounce the best engines playing alone; this is a commonly cited example of how two (very different) heads are better than one. Yet going to a lawyer is not like visiting a tailor. People, including fairly delusional ones, know whether their clothes fit, but they do not know whether they have received expert counsel, although the outcome of the case might give them a hint.

Pasquale concludes his paper by asserting that “the rule of law entails a system of social relationships and legitimate governance, not simply the transfer and evaluation of information about behavior.” This is closely related to the doubts expressed at the beginning of the piece about the usefulness of datasets in training legal AI. He then states that those in the legal profession must handle “intractable conflicts of values that repeatedly require thoughtful discretion and negotiation.” This appears to be the legal equivalent of epistemological mysterianism. It stands on still shakier ground than its analogue because it is clear that laws are, or should be, rooted in some set of criteria agreed upon by the members of a given jurisdiction. Shouldn’t the rulings of lawmakers and the values that inform them be at least partially quantifiable? There are efforts, like EthicsNet, which are trying to prepare datasets and criteria to feed machines in the future (because they will certainly have to be fed by someone!). There is no doubt that the human touch in law will not be supplanted soon, but the question is whether our intuition should be exalted as a guarantee of fairness or recognized as a hindrance to moving beyond a legal system bogged down by the baggage of human foibles.

Adam Alonzi is a writer, biotechnologist, documentary maker, futurist, inventor, programmer, and author of the novels A Plank in Reason and Praying for Death: A Zombie Apocalypse. He is an analyst for the Millennium Project, the Head Media Director for BioViva Sciences, and Editor-in-Chief of Radical Science News. Listen to his podcasts here. Read his blog here.

Another View of Aging Science: That We Don’t Know Enough – Article by Reason

The New Renaissance Hat
Reason
June 27, 2014
******************************

Early this month I pointed out an example of the viewpoint on aging research that focuses on drugs, lifestyle, and metabolic manipulation and sees present work in that area as a matter of significant and ongoing progress. I disagree, for reasons that were explained in that post. Today, I’ll take a glance at a different view of the science of aging and longevity, one that is far more popular in the mainstream research community, and with which I also vehemently disagree.

Researchers in this field might be loosely divided into three camps, ordered here from largest to smallest: (a) those who study aging as a phenomenon without seeking to produce treatments, (b) those who seek to slow aging through the development of means to alter the operation of metabolism, such as calorie restriction mimetic drugs, and (c) those who aim to produce rejuvenation biotechnology capable of reversing aging. The vast majority of the aging research community at present considers that too little is known of the details of the progression of aging to make significant inroads in the design of treatments, and that the way forward is fundamental research with little hope of meaningful application for the foreseeable future. This attitude is captured here:

Let me ask you this: ‘Why can’t we cure death yet?’

We can’t ‘cure death’ because biology is extremely complicated. Without a fundamental understanding of how biological organisms work on a molecular level, we’re left to educated guesses on how to fix things that are breaking in the human body. Trying to cure disease without a full understanding of the underlying principles is like trying to travel to the moon without using Newton’s laws of motion.

The reason we haven’t cured death is because we don’t really understand life.

This is only half true, however. It is true if your goal is to slow down aging by engineering metabolism into a new state of safe operation in which the damage of aging accumulates more slowly. This is an enormous project. It is harder than anything that has been accomplished by humanity to date, measured on any reasonable scale of complexity. The community has only a few footholds in the vast sea of interactions that make up the progression of metabolism and damage through the course of aging, and this is despite the fact that there exists an easily obtained, very well studied altered state of metabolism that does in fact slow aging and extend life. Calorie restriction can be investigated in almost all laboratory species, and has been the subject of intense scrutiny for more than a decade now. Yet that barely constitutes a start on the long road of figuring out how to replicate the effects of calorie restriction on metabolism, let alone how to set off into the unknown to build an even better metabolic state of operation.

Listing these concerns is not even to start in on the fact that even if clinicians could perfectly replicate the benefits of calorie restriction, these effects are still modest in the grand scheme of things. It probably won’t add more than ten years to your life, and it won’t rejuvenate the old, nor restore any of their lost functionality. It is a way of slowing down the accumulation of further harm, not repairing the harm that has already happened. All in all it seems like a poor use of resources.

People who argue that we don’t understand enough of aging to treat it are conveniently omitting the fact that the research community does in fact have a proven, time-tested consensus list of the causes of aging. These are the fundamental differences between old tissue and young tissue, the list of changes that are not in and of themselves caused by any other process of aging. This is the damage that is the root of aging. There are certainly fierce arguments over which of these are more important and how in detail they actually interact with one another and metabolism to cause frailty, disease, and death. I’ve already said as much: researchers are still in the early days of producing the complete map of how aging progresses at the detail level. The actual list of damage and change is not much debated, however: that is settled science.

Thus if all you want to do is produce good treatments that reverse the effects of aging, you don’t need to know every detail of the progression of aging. You just need to remove the root causes. It doesn’t matter which of them are more or less important; just remove them all, and you’ll find out which were more or less important in the course of doing so – and probably faster than those who are taking the slow and steady scholarly route of investigation. If results are what we want to see, then instead of studying ever more esoteric little corners of our biology, researchers might change focus to ways of repairing the known forms of damage that cause aging. In this way treatments can be produced that actually rejuvenate patients and, unlike methods of slowing aging, will benefit the old by reversing and preventing age-related disease.

This is exactly analogous to the long history of building good bridges prior to the modern age of computer simulation and materials science. With the advent of these tools engineers can now build superb bridges, of a quality and size that would once have been impossible. But the engineers of ancient Rome built good bridges: bridges that allowed people to cross rivers and chasms, some of which still stand today. Victorian engineers built better bridges to facilitate commerce, bridges that have stood the test of time, and they worked with little more than the Romans had in comparison to today’s technologies. So the aging research community could begin to build its bridges now; we don’t have to wait for better science. Given that we are talking about aging, and the cost of aging is measured in tens of millions of lives lost and hundreds of millions more left suffering each and every year, it is amazing to me that there are not more initiatives focused on taking what is already known and settled about the causes of aging and using that knowledge to build rejuvenation treatments.

What we see instead is a field largely focused on doing nothing but gathering data, and where there are researchers interested in producing treatments, they are almost all focused on metabolic engineering to slow aging: the long, hard road to nowhere helpful. Yet repairing the known damage of aging is so very obviously the better course for research and development when compared to the prospect of an endless exploration and cataloging of metabolism. If we want a chance of significant progress towards means of treating aging in our lifetime, only SENS and other repair-based approaches have a shot at delivering. Attempts to slow aging are only a distraction: they will provide a growing flow of new knowledge of our biochemistry and the details of aging, but that knowledge isn’t needed in order to work towards effective treatments for aging today.

Reason is the founder of The Longevity Meme (now Fight Aging!). He saw the need for The Longevity Meme in late 2000, after spending a number of years searching for the most useful contribution he could make to the future of healthy life extension. When not advancing the Longevity Meme or Fight Aging!, Reason works as a technologist in a variety of industries. 
***

This work is reproduced here in accord with a Creative Commons Attribution license. It was originally published on FightAging.org.

Putting Randomness in Its Place – Video by G. Stolyarov II

A widespread misunderstanding of the meaning of the term “randomness” often results in false generalizations made regarding reality. In particular, the view of randomness as metaphysical, rather than epistemological, is responsible for numerous commonplace fallacies.

Reference
– “Putting Randomness in Its Place” – Essay by G. Stolyarov II