
The Rational Argumentator’s Sixteenth Anniversary Manifesto


G. Stolyarov II
September 2, 2018

On August 31, 2018, The Rational Argumentator completed its sixteenth year of publication. TRA is older than Facebook, YouTube, Twitter, and Reddit; it has outlasted Yahoo! Geocities, Associated Content, Helium, and most smaller online publications in philosophy, politics, and current events. Furthermore, the age of TRA now exceeds half of my lifetime to date. During this time, while the Internet and the external world shifted dramatically many times over, The Rational Argumentator strove to remain a bulwark of consistency – accepting growth in the form of improved infrastructure and accumulated content, but not the sweeping away of the old to ostensibly make room for the new. We do not look favorably upon tumultuous upheaval; the future may look radically different from the past and present, but ideally it should be built in continuity with both, and with preservation of every beneficial aspect that can possibly be preserved.

The Rational Argumentator has experienced unprecedented visitation during its sixteenth year, receiving 1,501,473 total page views as compared to 1,087,149 total page views during its fifteenth year and 1,430,226 during its twelfth year, which had the highest visitation totals until now. Cumulative lifetime TRA visitation has reached 12,481,258 views. Even as TRA’s publication rate has slowed to 61 features during its sixteenth year – due to various time commitments, such as the work of the United States Transhumanist Party (which published 147 features on its website during the same timeframe) – the content of this magazine has drawn increasing interest. Readers, viewers, and listeners are gravitating toward both old and new features, as TRA generally aims to publish works of timeless relevance. The vaster our archive of content, the greater the variety of works and perspectives it spans, and the more issues it engages with and reflects upon, the more robust and diverse our audience becomes – and the more insulated we become against the vicissitudes of the times and the fickle fluctuations of public sentiment and social-media fads.

None of the above is intended to deny or minimize the challenges faced by those seeking to articulate rational, nuanced, and sophisticated ideas on the contemporary Internet. Highly concerning changes to the consumption and availability of information have occurred over the course of this decade, including the following trends.

  • While social media have been beneficial in terms of rendering personal communication at a distance more viable, the fragmentation of social media and the movement away from the broader “open Internet” have seemingly accelerated. Instead of directly navigating and returning to websites of interest, most people now access content almost exclusively through social-media feeds. Even popular and appealing content may become constrained within the walls of a particular social network or sub-group thereof, simply due to the “black-box” algorithms of that social network, which influence without explanation who sees what and when, and which may not reflect what those individuals would have preferred to see. The constantly changing nature of these algorithms renders it difficult for content creators to maintain steady connections with their audiences. If one adds to the mix the increasing and highly troubling tendency of social networks to actively police the content their members see, we may be returning to a situation where most people find their content inexplicably curated by “gatekeepers” who, in the name of objectivity and often with unconscious biases in play, end up advancing ulterior agendas not in the users’ interests.
  • While the democratization of access to knowledge and information on the Internet has undoubtedly had numerous beneficial effects, we are also all faced with the problem of “information overload” and the need to prioritize essential bits of information within an immense sea which we observe daily, hourly, and by the minute. The major drawback of this situation – in which everyone sees everything in a single feed, often curated by the aforementioned inexplicable algorithms – is the difficulty of even locating information that is more than a day old, as it typically becomes buried far down within the social-media feed. Potential counters to this tendency exist – namely, old-fashioned, static websites which publish content that does not adjust and that is fixed to a particular URL, which can be bookmarked and visited time and again. But what proportion of the population has learned this technique of bookmarking and revisiting older content – instead of simply focusing on the social-media feed of the moment? It is imperative to resist the short-termist tendencies that the design of contemporary social media seems to encourage, as indulging these tendencies has had deleterious impacts on attention spans across an entire epoch of human culture.
  • Undeniably, much interesting and creative content has proliferated on the Internet, with opportunities for both deliberate and serendipitous learning, discovery, and intellectual enrichment. Unfortunately, the emergence of such content has coincided with deleterious shifts in cultural norms away from the expectation of concerted, sequential focus (the only way that human minds can actually achieve at a high level) and toward incessant multi-tasking and the expectation of instantaneous response to any external stimulus, human or automated. The practice of dedicating a block of time to read an article, watch a video, or listen to an audio recording – once a commonplace behavior – has come to be a luxury for those who can wrest segments of time and space away from the whirlwind of external stimuli and impositions within which humans (irrespective of material resources or social position) are increasingly expected to spin. It is fine to engage with others and venture into digital common spaces occasionally or even frequently, but in order for such interactions to be productive, one has to have meaningful content to offer; the creation of such content necessarily requires time away from the commons and a reclamation of the concept of private, solitary focus to read, contemplate, apply, and create.
  • An environment in which immediate, recent, and short-term-oriented content tends to attract the most attention amplifies the impulsive, range-of-the-moment, reactive emotional tendencies of individuals, rather than the thoughtful, long-term-oriented, constructive, rational tendencies. Accordingly, political and cultural discourse become reduced to bitter one-liners that exacerbate polarization, intentional misunderstanding of others, and toxicity of rhetoric. The social networks where this has been most salient have been those that limit the number of characters per post and prioritize quantity of posts over quality and the instantaneity of a response over its thoughtfulness. The infrastructures whose design presupposes that everyone’s expressions are of equal value have produced a reduction of discourse to the lowest common denominator, which is, indeed, quite low. Even major news outlets, where some quality selection is still practiced by the editors, have found that user comments often degenerate into a toxic morass. This is not intended to deny the value of user comments and interaction, in a properly civil and constructive context; nor is it intended to advocate any manner of censorship. Rather, this observation emphatically underscores the need for a return to long-form, static articles and longer written exchanges more generally as the desirable prevailing form of intellectual discourse. (More technologically intensive parallels to this long-form discourse would include long-form audio podcasts or video discussion panels where there is a single stream of conversation or narrative instead of a flurry of competing distractions.) Yes, this form of discourse takes more time and skill. Yes, this means that people have to form complex, coherent thoughts and express them in coherent, grammatically correct sentences. Yes, this means that fewer people will have the ability or inclination to participate in that form of discourse.
And yes, that may well be the point – because less of the toxicity will make its way through the structures which define long-form discourse – and because anyone who can competently learn the norms of long-form discourse, as they have existed throughout the centuries, will remain welcome to take part. Those who are not able or willing to participate can still benefit by spectating and, in the process, learning and developing their own skills.

The Internet was intended, by its early adopters and adherents of open Internet culture – including myself – to catalyze a new Age of Enlightenment through the free availability of information that would break down old prejudices and enable massively expanded awareness of reality and possibilities for improvement. Such a possibility remains, but humans thus far have fallen massively short of realizing it – because the will must be present to utilize constructively the abundance of available resources. Cultivating this will is no easy task; The Rational Argumentator has been pursuing it for sixteen years and will continue to do so. The effects are often subtle, indirect, long-term – more akin to the gradual drift of continents than the upward ascent of a rocket. And yet progress in technology, science, and medicine continues to occur. New art continues to be created; new treatises continue to be written. Some people do learn, and some people’s thinking does improve. There is no alternative except to continue to act in pursuit of a brighter future, and in the hope that others will pursue it as well – that, cumulatively, our efforts will be sufficient to avert the direst crises, make life incrementally safer, healthier, longer, and more comfortable, and, as a civilization, persist beyond the recent troubled times. The Rational Argumentator is a bulwark against the chaos – hopefully one among many – and hopefully many are at work constructing more bulwarks. Within the bulwarks, great creations may have room to develop and flourish – waiting for the right time, once the chaos subsides or is pacified by Reason, to emerge and beautify the world. In the meantime, enjoy all that can be found within our small bulwark, and visit it frequently to help it expand.

Gennady Stolyarov II,
Editor-in-Chief, The Rational Argumentator

This essay may be freely reproduced using the Creative Commons Attribution-ShareAlike 4.0 International License, which requires that credit be given to the author, G. Stolyarov II. Find out about Mr. Stolyarov here.

Review of Frank Pasquale’s “A Rule of Persons, Not Machines: The Limits of Legal Automation” – Article by Adam Alonzi


Adam Alonzi

From the beginning, Frank Pasquale, author of The Black Box Society: The Secret Algorithms That Control Money and Information, contends in his new paper “A Rule of Persons, Not Machines: The Limits of Legal Automation” that software, given its brittleness, is ill-suited to the complexities of taking a case through court and establishing a verdict. As he understands it, an AI cannot deviate far from the rules laid down by its creator. This assumption, which is not quite right even at the present time, only slightly tinges an otherwise erudite, sincere, and balanced treatment of the topic. He does not show much faith in the use of past cases to create datasets for the next generation of paralegals, automated legal services, and, in the more distant future, lawyers and jurists.

Lawrence Zelenak has noted that when taxes were filed entirely on paper, provisions were limited so as to avoid imposing unreasonably irksome nuances on the average person. Tax-return software has eliminated this “complexity constraint.” He goes on to state that without it, the laws, and the software that interprets them, become akin to a “black box” for those who must abide by them. William Gale has said taxes could be easily computed for “non-itemizers.” In other words, the government could use information it already has to present a “bill” to this class of taxpayers, saving time and money for all parties involved. However, simplification does not always align with everyone’s interests. TurboTax, whose business is built entirely on helping ordinary people navigate the labyrinth that is the American federal income tax, saw such measures as a threat to its business model and put together a grassroots campaign to fight them. More than just another example of a business protecting its interests, this is an ominous foreshadowing of an escalation scenario that will transpire in many areas if and when legal AI becomes sufficiently advanced.

Pasquale writes: “Technologists cannot assume that computational solutions to one problem will not affect the scope and nature of that problem. Instead, as technology enters fields, problems change, as various parties seek to either entrench or disrupt aspects of the present situation for their own advantage.”

What he is referring to here, in everything but name, is an arms race. The vastly superior computational powers of robot lawyers may make the already perverse incentive to write ever more Byzantine rules still more attractive to bureaucracies and lawyers. The concern is that the clauses and dependencies hidden within contracts will quickly explode in number, making them far too detailed even for professionals to make sense of in a reasonable amount of time. Because this sort of software may become a necessary accoutrement in most or all legal matters, the demand for it, or for professionals with access to it, will expand greatly at the expense of those who are unwilling or unable to adopt it. This, though Pasquale only hints at it, may lead to greater imbalances in socioeconomic power. On the other hand, he does not consider the possibility of bottom-up open-source (or state-led) efforts to create synthetic public defenders. While this may seem idealistic, it is fairly clear that the open-source model can compete with and, in some areas, outperform proprietary competitors.

It is not unlikely that, within subdomains of law, an array of arms races can and will arise between synthetic intelligences. If a robot lawyer knows its client is guilty, should it squeal? This will change the way jurisprudence works in many countries, but it would seem unwise to program any robot to knowingly lie about whether a crime, particularly a serious one, has been committed – including by omission. If it is fighting against a punishment it deems overly harsh for a given crime – trespassing to get a closer look at a rabid raccoon, say, or unintentional jaywalking – should it maintain its client’s innocence as a means to an end? A moral consequentialist, seeing that no harm was done (or, in some instances, could possibly have been done), may persist in pleading innocent. A synthetic lawyer may be more pragmatic than deontological, but it is not entirely correct, and certainly shortsighted, to (mis)characterize AI as only capable of blindly following a set of instructions, like a Fortran program made to compute the nth member of the Fibonacci sequence.

Human courts are rife with biases: judges give more lenient sentences after taking a lunch break (65% more likely to grant parole – nothing to sneeze at), attractive defendants are viewed favorably by unwashed juries and trained jurists alike, and prejudices of all kinds exist against various “out” groups, which can tip the scales toward a guilty verdict or a harsher sentence. Why, then, would someone have an aversion to the introduction of AI into a system that is clearly ruled, in part, by the quirks of human psychology?

DoNotPay is an app that helps drivers fight parking tickets. It allows drivers with legitimate medical emergencies to gain exemptions. So, as Pasquale says, not only will traffic management be automated, but so will appeals. However, as he cautions, a flesh-and-blood lawyer takes responsibility for bad advice. DoNotPay not only fails to take responsibility, but “holds its client responsible for when its proprietor is harmed by the interaction.” There is little reason to think machines would do a worse job of adhering to privacy guidelines than human beings unless, as in the example of a machine ratting on its client, there is some overriding principle that would compel them to divulge information to protect others from harm if a client’s diagnosis in some way makes him a danger in his personal or professional life. Is the client responsible for the mistakes of the robot it has hired? Should the blame not fall upon the firm that has provided the service?

Making a blockchain that could handle the demands of processing purchases and sales – one that takes into account all the relevant variables to make expert judgements on a matter – is no small task. As the infamous disagreement over the meaning of the word “chicken” in Frigaliment Importing Co. v. B.N.S. International Sales Corp. illustrates, the definition of what anything is can be a bit puzzling. The need to maintain a decent reputation is a strong incentive against knowingly cheating customers, and although cheating tends to be the exception for this reason, it is still necessary to protect against it. As one official at the Commodity Futures Trading Commission put it, “where a smart contract’s conditions depend upon real-world data (e.g., the price of a commodity future at a given time), agreed-upon outside systems, called oracles, can be developed to monitor and verify prices, performance, or other real-world events.”

Pasquale cites the SEC’s decision to force providers of asset-backed securities to file “downloadable source code in Python.” AmeriCredit responded by saying it “should not be forced to predict and therefore program every possible slight iteration of all waterfall payments” because its business is “automobile loans, not software development.” AmeriCredit does not seem to be familiar with machine learning. There is a case for making all financial transactions and agreements explicit on an immutable platform like a blockchain. There is also a case for making all such code open source, ready to be scrutinized by those with the talents to do so or, in the near future, by those with access to software that can quickly turn it into plain English, Spanish, Mandarin, Bantu, Etruscan, etc.

During the fallout of the 2008 crisis, some homeowners noticed that the entities on their foreclosure paperwork did not match the paperwork they received when their mortgages were sold to a trust. According to Dayen (2010), many banks did not fill out the paperwork at all. This seems to be a rather forceful argument in favor of incorporating synthetic agents into law practices. Like many futurists, Pasquale foresees an increase in “complementary automation.” Chess engines cooperating with humans can still trounce the best standalone AI out there – a commonly cited example of how two (very different) heads are better than one. Yet going to a lawyer is not like visiting a tailor. People, including fairly delusional ones, know whether their clothes fit; they do not know whether they have received expert counsel – although the outcome of the case might give them a hint.

Pasquale concludes his paper by asserting that “the rule of law entails a system of social relationships and legitimate governance, not simply the transfer and evaluation of information about behavior.” This is closely related to the doubts expressed at the beginning of the piece about the usefulness of datasets in training legal AI. He then states that those in the legal profession must handle “intractable conflicts of values that repeatedly require thoughtful discretion and negotiation.” This appears to be the legal equivalent of epistemological mysterianism. It stands on still shakier ground than its analogue, because it is clear that laws are, or should be, rooted in some set of criteria agreed upon by the members of a given jurisdiction. Shouldn’t the rulings of lawmakers and the values that inform them be at least partially quantifiable? There are efforts, like EthicsNet, which are trying to prepare datasets and criteria to feed machines in the future (because they will certainly have to be fed by someone!). There is no doubt that the human touch in law will not be supplanted soon, but the question is whether our intuition should be exalted as a guarantee of fairness or recognized as a hindrance to moving beyond a legal system bogged down by the baggage of human foibles.

Adam Alonzi is a writer, biotechnologist, documentary maker, futurist, inventor, programmer, and author of the novels A Plank in Reason and Praying for Death: A Zombie Apocalypse. He is an analyst for the Millennium Project, the Head Media Director for BioViva Sciences, and Editor-in-Chief of Radical Science News. Listen to his podcasts here. Read his blog here.