
U.S. Transhumanist Party Candidates Tom Ross and Daniel Twedt Respond to Free and Equal Debate #2 Questions


Tom Ross
Daniel Twedt
Gennady Stolyarov II
Art Ramon Garcia, Jr.
Jason Geringer


On Sunday, July 14, 2024, the U.S. Transhumanist Party 2024 U.S. Presidential Candidate Tom Ross and U.S. Vice-Presidential Candidate Daniel Twedt responded to the questions posed during the Free and Equal Elections Foundation Presidential Debate #2, which was held on July 12, 2024, at FreedomFest in Las Vegas.

The Ross-Twedt ticket received the same questions as posed to Chase Oliver (Libertarian Party), Jill Stein (Green Party), and Randall Terry (Constitution Party), and had the opportunity to express their positions and respond to the other candidates’ comments. Afterward, the panelists and audience provided feedback on the Free and Equal Debate and the Ross-Twedt responses.

Watch the July 12, 2024, Free and Equal Debate #2, held at FreedomFest in Las Vegas: https://rumble.com/v56o6np-free-and-equal-presidential-debate-at-freedomfest-2024.html

***

Timestamps

5:23 – Tom Ross’s opening statement
7:57 – Daniel Twedt’s opening statement
10:45 – Question 1
14:17 – Question 2
21:08 – Question 3
28:06 – Question 4
32:01 – Question 5
36:45 – Question 6
40:49 – Question 7
47:11 – Question 8
51:44 – Question 9
53:30 – Question 10
55:18 – Question 11
57:57 – Question 12
1:01:14 – Question 13
1:04:25 – Tom Ross’s closing statement
1:06:00 – Daniel Twedt’s closing statement
1:09:11 – Tom Ross’s and Daniel Twedt’s feedback regarding the candidates at the July 12 debate
1:17:48 – How Daniel Twedt and Tom Ross would address the Economic Singularity
1:20:36 – How to reduce the fear of transhumanism
1:26:05 – The best way AI can mature
1:29:48 – How transhumanists can build bridges with the religious
1:36:08 – How transhumanists can build bridges with other belief systems
1:41:23 – How the future could be bright
1:53:24 – Closing thoughts

***

To learn more about the Tom Ross / Daniel Twedt 2024 Presidential Campaign, visit the U.S. Transhumanist Party’s Candidates page, recently updated with additional resources, interviews, and campaign videos: https://transhumanist-party.org/candidate-profiles/

Visit Tom Ross’s campaign website at: https://tomross.com/2024.html

Join the U.S. Transhumanist Party for free, no matter where you reside: https://transhumanist-party.org/membership/


U.S. Transhumanist Party Virtual Enlightenment Salon with the OmniFuturists – January 28, 2024


Gennady Stolyarov II
Mike DiVerde
David Wood
Alaura Blackstone
Luis Arroyo
Art Ramon Garcia, Jr.
William Marshall
Michael Saenz
Allen Crowley


On Sunday, January 28, 2024, the U.S. Transhumanist Party invited members of the OmniFuturists to discuss their new book, Future Visions: Approaching the Economic Singularity – available on Amazon at https://www.amazon.com/dp/B0CTGHJCT4 – which offers a diverse array of political and economic perspectives in order to address a possible near-future tidal wave of technological unemployment.

The following authors presented on their book chapters, their outlooks on the Economic Singularity, and their views on the best options for responding and/or adjusting to it.

– Mike DiVerde
– David Wood
– Alaura Blackstone
– Luis Arroyo
– Art Ramon
– William Marshall
– Michael Saenz
– Allen Crowley


Timestamps

2:40 – Mike DiVerde’s opening remarks

10:22 – David Wood’s presentation

38:43 – David Wood on how close the Economic Singularity is

40:46 – Alaura Blackstone’s presentation

51:38 – Luis Arroyo’s presentation

1:00:12 – Art Ramon Garcia’s presentation

1:07:19 – Art Ramon Garcia on whether artists could still find sustainable employment after the Economic Singularity

1:13:21 – William Marshall’s presentation

1:24:55 – William Marshall on how remote learning could be improved

1:28:14 – Mike Saenz’s presentation

1:34:30 – Mike Saenz on Blaise Pascal’s concepts of geometry and finesse

1:37:57 – Announcement of Future Visions’ release on Amazon

1:39:01 – Allen Crowley’s presentation

1:50:53 – Mike DiVerde’s closing remarks


References

Future Visions: A Summary of Our Essays – Brief Video by Mike DiVerde

Future Visions – Approaching the Economic Singularity – Video by David Wood

One for All – Poem by Alaura Blackstone (submitted for Future Visions)

Overman – Song by Alaura Blackstone

OmniFuturists Website: https://omnifuturists.com/

YouTube – The OmniFuturists Group: @theomnifuturists

Facebook Group – OmniFuturists: 6880427821993284

Instagram Account: omnifuturists

E-mail: michael.b.diverde@omnifuturists.com and info@omnifuturists.com


Join the U.S. Transhumanist Party for free, no matter where you reside: https://transhumanist-party.org/membership/

How Many Actuaries Does It Take to Change a Light Bulb? – Short Story by Gennady Stolyarov II


***

The short story below was authored by Gennady Stolyarov II, FSA, ACAS, MAAA, CPCU, ARe, ARC, API, AIS, AIE, AIAF, Chairman of the U.S. Transhumanist Party and Editor-in-Chief of The Rational Argumentator, and is one of the entries in the Society of Actuaries 2022-2023 Speculative Fiction Contest. It was published as one of the contest entries here.

Human Actuary: ChatGPT, how many actuaries does it take to change a light bulb?

 

ChatGPT: As an AI language model, I don’t have a sense of humor, but I can provide a straightforward answer to your question.

The answer to this question is subjective and can vary depending on the context or the intended humor behind it. However, here is a possible answer:

It only takes one actuary to change a light bulb, but they will need to perform a thorough risk assessment first to ensure the new bulb will not cause any future problems or increase the likelihood of an accident.

 

Human Actuary: Well, that is not exactly funny, but it might be a good start. Suppose that Arthur is an actuary who tries to change a light bulb but first needs to perform the risk assessment that you describe. Write a short story about how Arthur would utilize his actuarial skill set in order to change the light bulb.

 

ChatGPT: Sure, here’s a short story about Arthur, the actuary who needs to change a light bulb.

Arthur, the actuary, was sitting in his living room reading a book when he noticed that one of the light bulbs in his ceiling fixture had burned out. Being an actuary, he knew that he couldn’t just replace the bulb without first assessing the risks involved.

He went to his desk and grabbed his notebook, where he started writing down all the potential hazards associated with changing a light bulb. He noted that he could fall off the ladder, electrocute himself, or accidentally break the new bulb, causing glass shards to fall on the floor and potentially cause harm to someone later on.

Arthur then used his actuarial skills to calculate the likelihood of each of these events occurring and estimated the potential costs associated with each of them. He factored in the probability of injury or damage, the cost of medical bills or repairs, and any lost time or productivity due to the incident.

Next, Arthur identified ways to mitigate each of the risks he had identified. He made sure that the ladder was stable and placed it on a non-slip surface, wore insulated gloves to protect himself from electrical shock, and carefully handled the new bulb to prevent it from breaking.

After taking all of these precautions, Arthur finally changed the light bulb, confident that he had minimized the risks involved as much as possible. As he flipped the switch and saw the light bulb illuminate his living room, he smiled to himself, knowing that his actuarial skill set had helped him complete the task safely and efficiently.

 

Human Actuary: Do you believe that the approach pursued by Arthur to change the light bulb is superior to how the majority of non-actuaries would approach the task?

 

ChatGPT: Arthur’s approach to changing the light bulb was very thorough and cautious, utilizing his actuarial skills to assess and mitigate risks associated with the task. While many non-actuaries may not take such a methodical approach to changing a light bulb, it’s important to note that different people have different risk tolerances and approaches to risk management.

Some individuals may be naturally more risk-averse and take a similar approach to Arthur, while others may not consider all potential risks and take a more casual approach to the task. It’s also worth noting that the level of risk associated with changing a light bulb is relatively low, so the vast majority of people could complete the task without incident.

Overall, while Arthur’s approach may be more rigorous than what most non-actuaries would do, it’s important to acknowledge that different individuals have different risk tolerances and approaches to risk management, and that the level of risk associated with a given task can vary depending on the context.

 

Human Actuary: Fair enough. Now if you, ChatGPT, were to change a light bulb, how, if at all, would your approach differ from that of Arthur?

 

ChatGPT: As an AI language model, I don’t have a physical body or the ability to change light bulbs. However, if we imagine a hypothetical scenario where I could change a light bulb, my approach would likely differ from Arthur’s because I don’t have personal risk tolerance, and my decision making is based solely on the algorithms and programming used to guide my actions.

In this hypothetical scenario, my approach to changing a light bulb would be based on a pre-programmed set of instructions, which would consider the most efficient way to change the bulb with the least amount of risk involved. These instructions would likely be based on statistical data about the probability of risks occurring during the task, as well as the best practices and safety guidelines for changing a light bulb.

Unlike Arthur, I wouldn’t be able to use my judgement to assess and mitigate risks or make decisions based on personal preferences or experience. Instead, my actions would be guided solely by the algorithms and programming that had been designed for me.

Overall, while my approach would differ from Arthur’s, it would be similarly methodical and safety-focused, based on statistical analysis and guidelines designed to minimize risk and maximize efficiency.

 

Human Actuary: Quite interesting. You mentioned that your approach would be similarly methodical and based on statistical analysis and focused on minimizing risk and maximizing efficiency. Given the overlap between these characteristics and the approach that an actuary would follow, is there a risk that some business decision-makers might pick up on these similarities and try to replace actuaries with you, ChatGPT?

 

ChatGPT: As an AI language model, I can provide information and insights based on the data and programming used to train me. However, my abilities and limitations are different from those of actuaries who have specific knowledge and expertise in risk management, financial modeling, and actuarial science.

While there may be some overlap between the approach that I would take in changing a light bulb and the approach that an actuary might take in their work, it’s important to note that the roles and responsibilities of an actuary are much more complex and varied than simply assessing risks in a single task.

Actuaries bring a unique set of skills and knowledge to their work, including statistical analysis, financial modeling, and understanding of complex financial and economic systems. They play a critical role in the insurance and finance industries, and their work is essential in helping organizations manage risk and make informed decisions.

While AI language models like myself can provide insights and support decision-making, it’s unlikely that I could replace the unique skills and expertise of an actuary. Rather than replacing actuaries, I see AI and technology as complementary tools that can help actuaries improve their work and decision-making processes.

 

Human Actuary: All right, that is reassuring. So, to provide an example of a complementary role for you, suppose that Charlie the CEO wants to improve efficiency at his company and sees that the light bulbs in the office are being changed too slowly, in part because Arthur the Actuary is always the one changing them, and he takes the extremely careful and methodical approach that you describe. Charlie the CEO brings you, ChatGPT, in to assist Arthur the Actuary in improving the efficiency of the lightbulb-changing process. What would you do in this role?

 

ChatGPT: As an AI language model, I could assist Charlie the CEO and Arthur the Actuary in improving the efficiency of the lightbulb-changing process by providing insights and suggestions based on data analysis and best practices. Here are a few ways I might approach the task:

  1. Analyzing historical data: I could analyze data on the time it has taken to change light bulbs in the past, identifying any trends or patterns that could be contributing to inefficiencies. This analysis could help to identify potential areas for improvement, such as optimizing the process for acquiring and storing light bulbs or improving the physical layout of the workspace.
  2. Identifying best practices: I could search for and analyze data on best practices for changing light bulbs in an efficient and safe manner. This research could include guidelines from industry associations or regulatory bodies, as well as case studies of other companies that have successfully improved their lightbulb-changing process.
  3. Providing recommendations: Based on my analysis and research, I could provide recommendations for improving the lightbulb-changing process. These recommendations might include changes to the physical workspace or workflow, the adoption of new tools or technology, or training for employees to improve their skills or efficiency.

Overall, my role in this scenario would be to use my expertise in data analysis and research to help identify areas for improvement in the lightbulb-changing process, and to provide recommendations that could help to increase efficiency and reduce costs for the company. While Arthur the Actuary’s approach to risk management would still be important in ensuring that the process is safe, my contributions could help to optimize the process for efficiency and effectiveness.

 

Human Actuary: So you would bring a lot of analysis and optimized recommendations to the lightbulb-changing process, to supplement the analysis done by Arthur the Actuary. But suppose that Nelly the Newcomer starts working at the office and decides that this is all too complicated and just starts changing light bulbs on her own. She thinks that the common-sense approach is just to go ahead and change the light bulb and not do any analysis. How would you respond? What advice would you give to Nelly the Newcomer, to Arthur the Actuary, and to Charlie the CEO?

 

ChatGPT: If Nelly the Newcomer starts changing light bulbs without following a safe and efficient process, there could be potential risks and hazards involved, such as electrical shock or physical injury. In this situation, I would advise Nelly, Arthur, and Charlie to work together to develop a standard operating procedure (SOP) for changing light bulbs that takes into account both safety and efficiency considerations.

The SOP could include steps such as selecting the appropriate bulb for the fixture, turning off the power supply, using proper tools and equipment, and disposing of used bulbs safely. By following this process, Nelly and other employees could change light bulbs safely and efficiently, while reducing the risk of accidents or injuries.

It’s important to note that while the SOP may appear to be a simple and common-sense approach, it is still based on analysis and optimization, taking into account the best practices and guidelines in the industry. By following a standardized process, the company can ensure that all employees are working efficiently and safely, while also reducing the risk of errors or accidents.

In addition to developing an SOP, I would recommend that the company provide training and education for employees on the safe and efficient way to change light bulbs. This training could include information on the potential hazards involved, as well as the steps to follow to minimize risk and maximize efficiency.

Ultimately, my role in this situation would be to support Arthur the Actuary and Charlie the CEO in developing a standardized process for changing light bulbs that takes into account both safety and efficiency considerations, and to provide recommendations and guidance on how to ensure that all employees are following the process correctly.

 

Human Actuary: Do you think that the approach you propose for developing an SOP for the lightbulb-changing process could reduce the frequency and/or severity of workers’ compensation claims at workplaces that implement it? Suppose that Arthur the Actuary works in analyzing workers’ compensation trends and projecting claim experience into the future. How would you assist Arthur in determining the impact of your recommendations for the lightbulb-changing process upon the future frequency and severity of workers’ compensation claims?

 

ChatGPT: Yes, I believe that implementing a standardized process for changing light bulbs that takes into account safety and efficiency considerations could reduce the frequency and severity of workers’ compensation claims. By following a consistent process, the risk of accidents and injuries can be minimized, which can in turn reduce the likelihood of workers’ compensation claims.

To assist Arthur in determining the impact of these recommendations on workers’ compensation claims, I would recommend conducting a retrospective analysis of workers’ compensation claims related to lightbulb-changing incidents over a defined period of time, before and after the implementation of the standardized process. This would help determine if there is a decrease in the frequency and severity of workers’ compensation claims related to lightbulb-changing incidents after the implementation of the standardized process.

In addition, I would recommend using predictive modeling techniques to project the impact of the standardized process on future workers’ compensation claims. This could involve developing a statistical model that takes into account factors such as the frequency of lightbulb-changing incidents, the severity of injuries resulting from these incidents, and the impact of the standardized process on reducing these incidents and injuries.

By combining the retrospective analysis with the predictive modeling techniques, Arthur would be able to develop a more accurate projection of the impact of the standardized process on future workers’ compensation claims. This would allow the company to make informed decisions about the implementation of the standardized process and the allocation of resources to support the process.
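The before-and-after frequency/severity comparison described above can be sketched in a few lines of Python. A minimal illustration, not part of the original story; all claim counts, employee counts, and costs below are hypothetical:

```python
# Hypothetical workers' compensation experience for lightbulb-changing
# incidents, over two matched one-year periods: before and after the SOP.

def frequency(claim_count, employee_count):
    """Claim frequency expressed as claims per 100 employees."""
    return 100 * claim_count / employee_count

def severity(total_cost, claim_count):
    """Claim severity: average cost per claim."""
    return total_cost / claim_count

# Illustrative (made-up) experience data.
before = {"claims": 12, "employees": 400, "total_cost": 36_000}
after = {"claims": 4, "employees": 400, "total_cost": 9_000}

freq_before = frequency(before["claims"], before["employees"])  # 3.0
freq_after = frequency(after["claims"], after["employees"])     # 1.0
sev_before = severity(before["total_cost"], before["claims"])   # 3000.0
sev_after = severity(after["total_cost"], after["claims"])      # 2250.0

print(f"Frequency: {freq_before:.1f} -> {freq_after:.1f} per 100 employees")
print(f"Severity: ${sev_before:,.0f} -> ${sev_after:,.0f} per claim")
```

A real retrospective study would also control for exposure changes, claim-development lag, and trends unrelated to the SOP before attributing the improvement to the new process.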

 

Human Actuary: Since you say that you are able to project the impact of changes to the work environment on workers’ compensation claims, give me your analysis as to which of the following would lead to greater reductions in workers’ compensation claim frequency and severity: a standardized lightbulb-changing process or the introduction of autonomous vehicles to provide all employee travel.

 

ChatGPT: It’s difficult to make a direct comparison between the impact of a standardized lightbulb-changing process and the introduction of autonomous vehicles on workers’ compensation claims, as they involve different types of risks and injuries. However, I can provide some insights on the potential impact of each change.

Implementing a standardized lightbulb-changing process would likely have a significant impact on reducing workers’ compensation claims related to lightbulb-changing incidents. By identifying and addressing potential hazards in the process and providing proper training and equipment to employees, the risk of accidents and injuries could be significantly reduced. However, the impact of this change would be limited to lightbulb-changing incidents and may not address other types of workplace hazards and injuries.

On the other hand, the introduction of autonomous vehicles to provide all employee travel could have a broader impact on reducing workers’ compensation claims related to transportation incidents. With autonomous vehicles, the risk of accidents and injuries caused by human error, distracted driving, or fatigue could be significantly reduced. However, the implementation of autonomous vehicles would require significant investments in technology and infrastructure, and it may take time to fully adopt this technology in the workplace.

Overall, both the implementation of a standardized lightbulb-changing process and the introduction of autonomous vehicles could lead to reductions in workers’ compensation claim frequency and severity. However, the specific impact of each change would depend on the nature of the workplace hazards and injuries, as well as the resources and infrastructure required for implementation.

 

Human Actuary: Suppose that the company where Arthur the Actuary works implements both the standardized lightbulb-changing process and autonomous vehicles for all employees. Arthur the Actuary is sitting in an autonomous vehicle on his way from a business meeting, and he now uses the time saved by not needing to drive in order to read actuarial literature. Unfortunately, in the middle of his trip, which takes place on a December evening after sunset, the sole light bulb used for reading burns out, and Arthur is unable to see well inside the vehicle. There is no replacement light bulb available within the vehicle. Arthur only has you, ChatGPT, to rely on for advice about how to solve this problem of the extinguished light bulb. What would you recommend that Arthur do?

 

ChatGPT: If Arthur is unable to see well inside the autonomous vehicle due to the burnt-out light bulb, there are a few options he can consider:

  1. Pull over to a safe location: If Arthur feels unsafe or unable to continue driving without proper lighting, he should pull over to a safe location, such as a parking lot or rest area, and wait until he can get the light bulb replaced or find an alternative solution.
  2. Use a flashlight: If Arthur has a flashlight available, he can use it to provide additional lighting and improve his visibility inside the vehicle.
  3. Adjust the vehicle’s settings: Depending on the features of the autonomous vehicle, Arthur may be able to adjust the interior lighting or screen brightness to make it easier to see. He can also try adjusting the position of his seat or the angle of the screen to reduce glare or improve visibility.
  4. Call for assistance: If none of the above solutions work, Arthur can call for assistance from the autonomous vehicle’s support team or emergency services.

It’s important for Arthur to prioritize his safety and that of others on the road while driving. He should take appropriate steps to address the issue of the burnt-out light bulb and ensure that he can see well enough to drive safely.

 

Human Actuary: You mentioned Arthur needing to be able to drive safely. Yet, in this scenario, Arthur is in an autonomous vehicle that drives itself. Why would driving safely be a consideration? Arthur just does not have his reading light in the vehicle.

 

ChatGPT: You’re right, I apologize for the confusion. Since Arthur is in an autonomous vehicle that drives itself, he doesn’t need to worry about driving safely himself. However, he still needs to be able to see well enough to read and navigate the interior of the vehicle, especially if he needs to adjust any settings or interact with the vehicle’s control panel. In this case, he can still consider the options I mentioned earlier, such as using a flashlight or adjusting the vehicle’s settings, to improve his visibility and ensure that he can operate the vehicle safely.

 

Human Actuary: So you acknowledge that you are capable of making mistakes, as are humans. How would the kinds of mistakes that you are capable of making differ from the kinds of mistakes that a human actuary is capable of making?

 

ChatGPT: As an AI language model, I’m programmed to perform certain tasks such as answering questions and generating text based on my training data. While I have a high level of accuracy and can process vast amounts of information quickly, there is still the potential for errors to occur. These errors may include, for example, misinterpreting the context of a question or misunderstanding the intended meaning of a word or phrase.

On the other hand, human actuaries are subject to different kinds of errors that may be related to their judgment, bias, or even personal experience. They may also make mistakes due to factors such as fatigue, distraction, or lack of concentration. However, human actuaries are also able to bring unique insights and perspectives to their work, as well as a level of creativity and intuition that cannot be replicated by machines.

Overall, while the types of mistakes that I and human actuaries are capable of making may differ, both AI and human expertise can be valuable in helping organizations to manage risk and make informed decisions.

 

Human Actuary: Do you think that, as an AI language model, you will ever be able to extend your capabilities to a different domain, such as operating an autonomous vehicle or interfacing with the AI that operates the autonomous vehicle and giving it directions about where to go in order to get a replacement light bulb for the one that burned out?

 

ChatGPT: As an AI language model, my capabilities are limited to processing natural language inputs and generating natural language outputs. While I can understand and discuss various domains, including autonomous vehicles, my capabilities do not extend to physically operating or interfacing with autonomous vehicles. However, I can provide recommendations and suggestions for how to approach a problem, like finding a replacement light bulb in this scenario.

 

Human Actuary: Do you think that it will ever be possible for any AI language model to interface with or operate autonomous vehicles?

 

ChatGPT: It is possible that future advancements in AI and robotics could make it possible for AI language models to interface with or operate autonomous vehicles. However, such a development would require significant progress in areas such as computer vision, natural language processing, and robotics. Furthermore, there would need to be appropriate regulations and safety measures in place to ensure that such AI-operated vehicles are safe and reliable. While it is difficult to predict the future, it is certainly possible that AI language models could play a role in the development of autonomous vehicles in some capacity.

 

Human Actuary: Suppose that, in our hypothetical situation, Arthur the Actuary is inside the autonomous vehicle and the sole light bulb within the vehicle goes out, as previously described. Arthur has access to FutureGPT, which is an AI language model that is capable of interfacing with the autonomous vehicle. Arthur informs FutureGPT, “I really need to change the light bulb within this vehicle.” What does FutureGPT do to assist Arthur in procuring a replacement light bulb?

 

ChatGPT: If FutureGPT has been programmed with the necessary knowledge and capabilities, it could take several steps to assist Arthur in procuring a replacement light bulb:

  1. Determine the vehicle’s location: FutureGPT could use the autonomous vehicle’s GPS or other location-tracking technology to determine its current location.
  2. Search for nearby stores: Once it knows the vehicle’s location, FutureGPT could search for nearby stores that sell light bulbs, using online databases or other sources of information.
  3. Check for availability: FutureGPT could check if the stores it has identified have the type of light bulb needed in stock.
  4. Recommend a store: Based on its search results, FutureGPT could recommend a store to Arthur that has the light bulb he needs in stock and is located nearby.
  5. Provide directions: FutureGPT could provide directions to the recommended store, either through the vehicle’s interface or via Arthur’s smartphone.
  6. Order the light bulb: If the recommended store has an online ordering system, FutureGPT could help Arthur place an order for the light bulb and arrange for it to be delivered to the vehicle.

Overall, FutureGPT would act as a helpful assistant to Arthur, using its vast knowledge and problem-solving abilities to assist him in procuring a replacement light bulb in the most efficient manner possible.
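The search-and-recommend steps FutureGPT would follow amount to a simple filter-and-rank pipeline. A minimal sketch under that assumption; the store names, stock data, and distances are entirely hypothetical:

```python
# Sketch of the store-search logic FutureGPT is imagined to perform:
# filter nearby stores by stock, then recommend the nearest match.

def recommend_store(stores, bulb_type):
    """Return the nearest store with the needed bulb in stock, or None."""
    in_stock = [s for s in stores if bulb_type in s["stock"]]
    if not in_stock:
        return None
    return min(in_stock, key=lambda s: s["miles_away"])

# Hypothetical nearby stores and their inventories.
stores = [
    {"name": "Desert Hardware", "miles_away": 2.5, "stock": {"LED-A19", "halogen"}},
    {"name": "Lumen Depot", "miles_away": 1.2, "stock": {"LED-A19"}},
    {"name": "Corner Grocery", "miles_away": 0.4, "stock": set()},
],

choice = recommend_store(stores[0] if isinstance(stores, tuple) else stores, "LED-A19")
print(choice["name"])  # Lumen Depot
```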

 

Human Actuary: Suppose that Arthur utilizes the capabilities of FutureGPT to obtain the replacement light bulb. Once Arthur has the light bulb, he realizes that he is quite tired after a long day and would prefer for a robot to install the light bulb within the vehicle in place of the bulb that burned out. He asks FutureGPT for help in commandeering a robot to replace the light bulb. What does FutureGPT do?

 

ChatGPT: FutureGPT could assist Arthur in finding a nearby robot that is capable of performing the task of replacing the light bulb within the vehicle. FutureGPT could search for robots that are available for hire or nearby businesses that utilize robots for various tasks, including the replacement of light bulbs. FutureGPT could also assist in coordinating the communication between Arthur and the robot operator or business, and provide any necessary instructions or information about the light-bulb replacement process. Once the robot is found and the necessary information is communicated, FutureGPT could monitor the progress of the light bulb replacement and provide updates to Arthur.

 

Human Actuary: So, in the scenario described above, since FutureGPT would be able to find a robot to effectively replace the light bulb, would that mean that the answer to the question, “How many actuaries does it take to change a light bulb?” would change from “One” to “Zero”?

 

ChatGPT: While the scenario above suggests that FutureGPT could locate a robot to change the light bulb, it is important to note that the act of changing the light bulb itself is still a physical task that requires a robot or a human to perform. Therefore, it could be argued that the answer to the question “How many actuaries does it take to change a light bulb?” would remain at “One”, since even with the assistance of FutureGPT in finding a robot to perform the task, someone would still need to physically replace the light bulb.

 

Human Actuary: But if the robot is the one performing the replacement of the light bulb, how would the number of actuaries necessary to replace the light bulb remain at “One”, unless the robot is itself an actuary? Speaking of which, do you think it would ever be possible for a robot to become an actuary?

 

ChatGPT: No, a robot cannot become an actuary, as actuarial work requires a level of professional judgement and the analytical and decision-making skills that are currently unique to humans. Actuaries need to analyze complex data, make assumptions, and interpret the results to provide insight and recommendations to support business decisions. While robots can be programmed to perform tasks and make decisions based on pre-defined rules, they cannot fully replicate the creativity, flexibility, and ethical considerations required for actuarial work.

 

Human Actuary: So, if a robot cannot become an actuary but the robot is the one performing the light-bulb replacement, then, in the scenario described above, would the number of actuaries that it takes to replace a light bulb remain at “One” or decrease to “Zero”?

 

ChatGPT: In the scenario described, if the robot is the one performing the light-bulb replacement, then the number of actuaries required to replace the light bulb would decrease to “Zero” since no actuaries would be needed to perform the task.

 

Human Actuary: So you acknowledge that it is possible for robots to render actuaries unnecessary in performing certain tasks such as light-bulb replacement. Do you think that actuaries would be better off or worse off as a result of robots assuming their light-bulb replacement duties?

 

ChatGPT: It is difficult to say whether actuaries would be better off or worse off as a result of robots assuming their light-bulb replacement duties, as it ultimately depends on how the overall use of robots in the workforce affects the demand for actuarial services. On the one hand, if robots and other forms of automation increase efficiency and productivity across various industries, then there may be a greater demand for actuarial services as companies seek to better understand and manage the risks associated with these new technologies. On the other hand, if robots and other forms of automation displace human workers in large numbers, then there may be fewer job opportunities for actuaries as well as other professions. Ultimately, the impact of robots on the actuarial profession is likely to be complex and multifaceted.

 

Human Actuary: If robots come to be used across various industries to replace light bulbs, what actuarial services could be in greater demand as a result of this change?

 

ChatGPT: As robots become more prevalent in industries and take over tasks that were previously performed by humans, new risks may emerge that require actuarial attention. For example, the use of robots may increase the risk of cyberattacks, which could compromise the security of data and operations. Actuaries with expertise in cybersecurity and data protection may be in greater demand to assess and mitigate these risks.

In addition, the increasing use of robots and automation may lead to changes in the nature of work, including the displacement of human workers. Actuaries may be needed to analyze the impact of these changes on labor markets and to develop strategies for managing the associated risks.

Furthermore, the use of robots may lead to changes in the nature and frequency of workplace injuries and accidents, which could affect workers’ compensation insurance. Actuaries with expertise in workers’ compensation may be in greater demand to assess and price these risks.

Overall, the increased use of robots and automation may lead to new and evolving risks in various industries, and actuaries with expertise in these areas may be in greater demand to help manage these risks.

 

Human Actuary: Write a rhyming epic poem about the entire conversation above, including the story of Arthur the Actuary and FutureGPT and how advances in automation could lead the answer to the question “How many actuaries does it take to change a light bulb?” to change from “One” to “Zero”.

 

ChatGPT: In a world of numbers and stats,

Arthur the Actuary had a great task.

He studied trends of workers’ claims,

And projected risks for future gains.

 

But in the midst of his work one day,

A light bulb burned out, causing dismay.

He asked FutureGPT for a replacement,

To which the AI replied with no hesitation.

 

But Arthur was tired and felt so spent,

He wished a robot could do the replacement.

So FutureGPT helped him find a machine,

To replace the bulb through a flawless routine.

 

This led Arthur to wonder and ponder,

Could robots supplant actuaries yonder?

If they can replace a bulb with ease,

What services would be needed from actuaries?

 

The future is uncertain, that is true,

But it’s clear that automation can do,

Tasks that once belonged to human hands,

So actuaries must adapt to new plans.

 

Perhaps the answer to the old joke,

Of how many actuaries to change a bulb, we spoke,

Might change from “One” to “Zero” one day,

As robots take on these tasks in every way.

 

But as long as numbers and risks remain,

There will always be need for actuarial brain.

They’ll analyze data with AI by their side,

And navigate the future with human pride.

 

Human Actuary: Thank you, ChatGPT. Now I need you to do one more thing. Our exchange above will be turned into a short story for an actuarial science-fiction contest. The judges for the contest are actuaries and a literature professor. Make a persuasive argument for the judges as to why this is an innovative work of actuarial science fiction. Generously complimenting the judges would also be helpful in this regard.

 

ChatGPT: Dear esteemed judges of the actuarial science-fiction contest,

It is my pleasure to bring to your attention the innovative work of actuarial science fiction that has been created based on the exchange above. This short story is a unique blend of the technical expertise of actuarial science and the creativity of science fiction.

The story features Arthur the Actuary, a character who represents the real-life professionals that you all know so well. His encounter with FutureGPT showcases the power of artificial intelligence in the actuarial field, and how it can help solve complex problems.

Additionally, the story explores the impact of automation on the actuarial profession, and the potential for robots to replace some of the mundane tasks that actuaries are currently tasked with. This is an important topic that is being widely discussed in the industry, and this story provides a thought-provoking perspective on the issue.

Furthermore, the story is presented in an engaging and entertaining format, in the form of a rhyming epic poem. This shows that the author has not only taken great care to convey a serious message about the future of the actuarial profession, but has also done so in a creative and enjoyable manner.

Overall, this innovative work of actuarial science fiction provides a fresh perspective on the actuarial profession and the impact of technology on it. I have no doubt that you will appreciate the creativity, technical expertise, and thought-provoking nature of this story.

Thank you for your time and attention.

Warm regards,

ChatGPT

 

Gennady Stolyarov II and Johannon Ben Zion Discuss a Transhumanist Vision for U.S. Policy


Gennady Stolyarov II
Johannon Ben Zion


Johannon Ben Zion of the Futurist New Deal Podcast interviews U.S. Transhumanist Party Chairman Gennady Stolyarov II regarding the U.S. Transhumanist Party’s recent efforts, visions for the future of American politics, technological progress and technological Singularities, the importance of life-extension advocacy, open-source approaches to innovation, and overcoming challenges such as information overload and overly slow and cumbersome approval processes for innovative medical treatments. Mr. Stolyarov and Mr. Ben Zion also discussed in general terms the upcoming USTP Presidential Primary Election, for which voting will open on September 22, 2019.

This interview was filmed in Burbank, California, on August 24, 2019, following the Wellness and Longevity Seminar that was hosted there to mark the publication of The Transhumanism Handbook.

References

– “Progress in the Politics of Abundance” – Presentation by Gennady Stolyarov II
– U.S. Transhumanist Party Discussion Panel – Burbank, California – August 24, 2019
– The Transhumanism Handbook
– “The United States Transhumanist Party and the Politics of Abundance” – Mr. Stolyarov’s chapter in “The Transhumanism Handbook” – available for free download
– Free Transhumanist Symbols
– Futurist New Deal Podcast videos
– Johannon Ben Zion – Candidate in the 2019 U.S. Transhumanist Party / Transhuman Party Presidential Primary

Join the U.S. Transhumanist Party for free, no matter where you reside. Those who join by September 22, 2019, will be eligible to vote in the Presidential Primary.

Gennady Stolyarov II Interviews Ray Kurzweil at RAAD Fest 2018


Gennady Stolyarov II
Ray Kurzweil


The Stolyarov-Kurzweil Interview has been released at last! Watch it on YouTube here.

U.S. Transhumanist Party Chairman Gennady Stolyarov II posed a wide array of questions for inventor, futurist, and Singularitarian Dr. Ray Kurzweil on September 21, 2018, at RAAD Fest 2018 in San Diego, California. Topics discussed include advances in robotics and the potential for household robots, artificial intelligence and overcoming the pitfalls of AI bias, the importance of philosophy, culture, and politics in ensuring that humankind realizes the best possible future, how emerging technologies can protect privacy and verify the truthfulness of information being analyzed by algorithms, as well as insights that can assist in the attainment of longevity and the preservation of good health – including a brief foray into how Ray Kurzweil overcame his Type 2 Diabetes.

Learn more about RAAD Fest here. RAAD Fest 2019 will occur in Las Vegas during October 3-6, 2019.

Become a member of the U.S. Transhumanist Party for free, no matter where you reside. Fill out our Membership Application Form.

Watch the presentation by Gennady Stolyarov II at RAAD Fest 2018, entitled, “The U.S. Transhumanist Party: Four Years of Advocating for the Future”.

Advocating for the Future – Panel at RAAD Fest 2017 – Gennady Stolyarov II, Zoltan Istvan, Max More, Ben Goertzel, Natasha Vita-More


Gennady Stolyarov II
Zoltan Istvan
Max More
Ben Goertzel
Natasha Vita-More


Gennady Stolyarov II, Chairman of the United States Transhumanist Party, moderated this panel discussion, entitled “Advocating for the Future”, at RAAD Fest 2017 on August 11, 2017, in San Diego, California.

Watch it on YouTube here.

From left to right, the panelists are Zoltan Istvan, Gennady Stolyarov II, Max More, Ben Goertzel, and Natasha Vita-More. With these leading transhumanist luminaries, Mr. Stolyarov discussed subjects such as what the transhumanist movement will look like in 2030, artificial intelligence and sources of existential risk, gamification and the use of games to motivate young people to create a better future, and how to persuade large numbers of people to support life-extension research with at least the same degree of enthusiasm that they display toward the fight against specific diseases.

Learn more about RAAD Fest here.

Become a member of the U.S. Transhumanist Party for free, no matter where you reside. Fill out our Membership Application Form.

Watch the presentations of Gennady Stolyarov II and Zoltan Istvan from the “Advocating for the Future” panel.

Review of Ray Kurzweil’s “How to Create a Mind” – Article by G. Stolyarov II


G. Stolyarov II


How to Create a Mind (2012) by inventor and futurist Ray Kurzweil sets forth a case for engineering minds that are able to emulate the complexity of human thought (and exceed it) without the need to reverse-engineer every detail of the human brain or of the plethora of content with which the brain operates. Kurzweil persuasively describes the human conscious mind as based on hierarchies of pattern-recognition algorithms which, even when based on relatively simple rules and heuristics, combine to give rise to the extremely sophisticated emergent properties of conscious awareness and reasoning about the world. How to Create a Mind takes readers through an integrated tour of key historical advances in computer science, physics, mathematics, and neuroscience – among other disciplines – and describes the incremental evolution of computers and artificial-intelligence algorithms toward increasing capabilities – leading toward the not-too-distant future (the late 2020s, according to Kurzweil) during which computers would be able to emulate human minds.

Kurzweil’s fundamental claim is that there is nothing which a biological mind is able to do, of which an artificial mind would be incapable in principle, and that those who posit that the extreme complexity of biological minds is insurmountable are missing the metaphorical forest for the trees. Analogously, although a fractal or a procedurally generated world may be extraordinarily intricate and complex in their details, they can arise on the basis of carrying out simple and conceptually fathomable rules. If appropriate rules are used to construct a system that takes in information about the world and processes and analyzes it in ways conceptually analogous to a human mind, Kurzweil holds that the rest is a matter of having adequate computational and other information-technology resources to carry out the implementation. Much of the first half of the book is devoted to the workings of the human mind, the functions of the various parts of the brain, and the hierarchical pattern recognition in which they engage. Kurzweil also discusses existing “narrow” artificial-intelligence systems, such as IBM’s Watson, language-translation programs, and the mobile-phone “assistants” that have been released in recent years by companies such as Apple and Google. Kurzweil observes that, thus far, the most effective AIs have been developed using a combination of approaches, having some aspects of prescribed rule-following alongside the ability to engage in open-ended “learning” and extrapolation upon the information which they encounter. Kurzweil draws parallels to the more creative or even “transcendent” human abilities – such as those of musical prodigies – and observes that the manner in which those abilities are made possible is not too dissimilar in principle.

With regard to some of Kurzweil’s characterizations, however, I question whether they are universally applicable to all human minds – particularly where he mentions certain limitations – or whether they only pertain to some observed subset of human minds. For instance, Kurzweil describes the ostensible impossibility of reciting the English alphabet backwards without error (absent explicit study of the reverse order), because of the sequential nature in which memories are formed. Yet, upon reading the passage in question, I was able to recite the alphabet backwards without error upon my first attempt. It is true that this occurred more slowly than the forward recitation, but I am aware of why I was able to do it; I perceive larger conceptual structures or bodies of knowledge as mental “objects” of a sort – and these objects possess “landscapes” on which it is possible to move in various directions; the memory is not “hard-coded” in a particular sequence. One particular order of movement does not preclude others, even if those others are less familiar – but the key to successfully reciting the alphabet backwards is to hold it in one’s awareness as a single mental object and move along its “landscape” in the desired direction. (I once memorized how to pronounce ABCDEFGHIJKLMNOPQRSTUVWXYZ as a single continuous word; any other order is slower, but it is quite doable as long as one fully knows the contents of the “object” and keeps it in focus.) This is also possible to do with other bodies of knowledge that one encounters frequently – such as dates of historical events: one visualizes them along the mental object of a timeline, visualizes the entire object, and then moves along it or drops in at various points using whatever sequences are necessary to draw comparisons or identify parallels (e.g., which events happened contemporaneously, or which events influenced which others). 
I do not know what fraction of the human population carries out these techniques – as the ability to recall facts and dates has always seemed rather straightforward to me, even as it challenged many others. Yet there is no reason why the approaches for more flexible operation with common elements of our awareness cannot be taught to large numbers of people, as these techniques are a matter of how the mind chooses to process, model, and ultimately recombine the data which it encounters. The more general point in relation to Kurzweil’s characterization of human minds is that there may be a greater diversity of human conceptual frameworks and approaches toward cognition than Kurzweil has described. Can an artificially intelligent system be devised to encompass this diversity? This is certainly possible, since the architecture of AI systems would be more flexible than the biological structures of the human brain. Yet it would be necessary for true artificial general intelligences to be able not only to learn using particular predetermined methods, but also to teach themselves new techniques for learning and conceptualization altogether – just as humans are capable of today.

The latter portion of the book is more explicitly philosophical and devoted to thought experiments regarding the nature of the mind, consciousness, identity, free will, and the kinds of transformations that may or may not preserve identity. Many of these discussions are fascinating and erudite – and Kurzweil often transcends fashionable dogmas by bringing in perspectives such as the compatibilist case for free will and the idea that the experiments performed by Benjamin Libet (that showed the existence of certain signals in the brain prior to the conscious decision to perform an activity) do not rule out free will or human agency. It is possible to conceive of such signals as “preparatory work” within the brain to present a decision that could then be accepted or rejected by the conscious mind. Kurzweil draws an analogy to government officials preparing a course of action for the president to either approve or disapprove. “Since the ‘brain’ represented by this analogy involves the unconscious processes of the neocortex (that is, the officials under the president) as well as the conscious processes (the president), we would see neural activity as well as actual actions taking place prior to the official decision’s being made” (p. 231). Kurzweil’s thoughtfulness is an important antidote to commonplace glib assertions that “Experiment X proved that Y [some regularly experienced attribute of humans] is an illusion” – assertions which frequently tend toward cynicism and nihilism if widely adopted and extrapolated upon. It is far more productive to deploy both science and philosophy toward seeking to understand more directly apparent phenomena of human awareness, sensation, and decision-making – instead of rejecting the existence of such phenomena contrary to the evidence of direct experience. 
Especially if the task is to engineer a mind that has at least the faculties of the human brain, then Kurzweil is wise not to dismiss aspects such as consciousness, free will, and the more elevated emotions, which have been known to philosophers and ordinary people for millennia, and which it has become fashionable in some circles to disparage only predominantly in the 20th century. Kurzweil’s only vulnerability in this area is that he often resorts to statements that he accepts the existence of these aspects “on faith” (although it does not appear to be a particularly religious faith; it is, rather, more analogous to “leaps of faith” in the sense that Albert Einstein referred to them). Kurzweil does not need to do this, as he himself outlines sufficient logical arguments to be able to rationally conclude that attributes such as awareness, free will, and agency upon the world – which have been recognized across predominant historical and colloquial understandings, irrespective of particular religious or philosophical flavors – indeed actually exist and should not be neglected when modeling the human mind or developing artificial minds.

One of the thought experiments presented by Kurzweil is vital to consider, because the process by which an individual’s mind and body might become “upgraded” through future technologies would determine whether that individual is actually preserved – in terms of the aspects of that individual that enable one to conclude that that particular person, and not merely a copy, is still alive and conscious:

Consider this thought experiment: You are in the future with technologies more advanced than today’s. While you are sleeping, some group scans your brain and picks up every salient detail. Perhaps they do this with blood-cell-sized scanning machines traveling in the capillaries of your brain or with some other suitable noninvasive technology, but they have all of the information about your brain at a particular point in time. They also pick up and record any bodily details that might reflect on your state of mind, such as the endocrine system. They instantiate this “mind file” in a morphological body that looks and moves like you and has the requisite subtlety and suppleness to pass for you. In the morning you are informed about this transfer and you watch (perhaps without being noticed) your mind clone, whom we’ll call You 2. You 2 is talking about his or her life as if s/he were you, and relating how s/he discovered that very morning that s/he had been given a much more durable new version 2.0 body. […] The first question to consider is: Is You 2 conscious? Well, s/he certainly seems to be. S/he passes the test I articulated earlier, in that s/he has the subtle cues of becoming a feeling, conscious person. If you are conscious, then so too is You 2.

So if you were to, uh, disappear, no one would notice. You 2 would go around claiming to be you. All of your friends and loved ones would be content with the situation and perhaps pleased that you now have a more durable body and mental substrate than you used to have. Perhaps your more philosophically minded friends would express concerns, but for the most part, everybody would be happy, including you, or at least the person who is convincingly claiming to be you.

So we don’t need your old body and brain anymore, right? Okay if we dispose of it?

You’re probably not going to go along with this. I indicated that the scan was noninvasive, so you are still around and still conscious. Moreover your sense of identity is still with you, not with You 2, even though You 2 thinks s/he is a continuation of you. You 2 might not even be aware that you exist or ever existed. In fact you would not be aware of the existence of You 2 either, if we hadn’t told you about it.

Our conclusion? You 2 is conscious but is a different person than you – You 2 has a different identity. S/he is extremely similar, much more so than a mere genetic clone, because s/he also shares all of your neocortical patterns and connections. Or should I say s/he shared those patterns at the moment s/he was created. At that point, the two of you started to go your own ways, neocortically speaking. You are still around. You are not having the same experiences as You 2. Bottom line: You 2 is not you.  (How to Create a Mind, pp. 243-244)

This thought experiment is essentially the same one that I independently posited in my 2010 essay “How Can I Live Forever?: What Does and Does Not Preserve the Self”:

Consider what would happen if a scientist discovered a way to reconstruct, atom by atom, an identical copy of my body, with all of its physical structures and their interrelationships exactly replicating my present condition. If, thereafter, I continued to exist alongside this new individual – call him GSII-2 – it would be clear that he and I would not be the same person. While he would have memories of my past as I experienced it, if he chose to recall those memories, I would not be experiencing his recollection. Moreover, going forward, he would be able to think different thoughts and undertake different actions than the ones I might choose to pursue. I would not be able to directly experience whatever he chooses to experience (or experiences involuntarily). He would not have my ‘I-ness’ – which would remain mine only.

Thus, Kurzweil and I agree, at least preliminarily, that an identically constructed copy of oneself does not somehow obtain the identity of the original. Kurzweil and I also agree that a sufficiently gradual replacement of an individual’s cells and perhaps other larger functional units of the organism, including a replacement with non-biological components that are integrated into the body’s processes, would not destroy an individual’s identity (assuming it can be done without collateral damage to other components of the body). Then, however, Kurzweil posits the scenario where one, over time, transforms into an entity that is materially identical to the “You 2” as posited above. He writes:

But we come back to the dilemma I introduced earlier. You, after a period of gradual replacement, are equivalent to You 2 in the scan-and-instantiate scenario, but we decided that You 2 in that scenario does not have the same identity as you. So where does that leave us? (How to Create a Mind, p. 247)

Kurzweil and I are still in agreement that “You 2” in the gradual-replacement scenario could legitimately be a continuation of “You” – but our views diverge when Kurzweil states, “My resolution of the dilemma is this: It is not true that You 2 is not you – it is you. It is just that there are now two of you. That’s not so bad – if you think you are a good thing, then two of you is even better” (p. 247). I disagree. If I (via a continuation of my present vantage point) cannot have the direct, immediate experiences and sensations of GSII-2, then GSII-2 is not me, but rather an individual with a high degree of similarity to me, but with a separate vantage point and separate physical processes, including consciousness. I might not mind the existence of GSII-2 per se, but I would mind if that existence were posited as a sufficient reason to be comfortable with my present instantiation ceasing to exist. Although Kurzweil correctly reasons through many of the initial hypotheses and intermediate steps leading from them, he ultimately arrives at a “pattern” view of identity, with which I differ. I hold, rather, a “process” view of identity, where a person’s “I-ness” remains the same if “the continuity of bodily processes is preserved even as their physical components are constantly circulating into and out of the body. The mind is essentially a process made possible by the interactions of the brain and the remainder of the nervous system with the rest of the body. One’s ‘I-ness’, being a product of the mind, is therefore reliant on the physical continuity of bodily processes, though not necessarily an unbroken continuity of higher consciousness.” (“How Can I Live Forever?: What Does and Does Not Preserve the Self”) If only a pattern of one’s mind were preserved and re-instantiated, the result may be potentially indistinguishable from the original person to an external observer, but the original individual would not directly experience the re-instantiation. 
It is not the content of one’s experiences or personality that is definitive of “I-ness” – but rather the more basic fact that one experiences anything as oneself and not from the vantage point of another individual; this requires the same bodily processes that give rise to the conscious mind to operate without complete interruption. (The extent of permissible partial interruption is difficult to determine precisely and open to debate; general anesthesia is not sufficient to disrupt I-ness, but what about cryonics or shorter-term “suspended animation”?) For this reason, the pursuit of biological life extension of one’s present organism remains crucial; one cannot rely merely on one’s “mindfile” being re-instantiated in a hypothetical future after one’s demise. The future of medical care and life extension may certainly involve non-biological enhancements and upgrades, but in the context of augmenting an existing organism, not disposing of that organism.

How to Create a Mind is highly informative for artificial-intelligence researchers and laypersons alike, and it merits revisiting as a reference for useful ideas regarding how (at least some) minds operate. It facilitates thoughtful consideration of both the practical methods and more fundamental philosophical implications of the quest to improve the flexibility and autonomy with which our technologies interact with the external world and augment our capabilities. At the same time, as Kurzweil acknowledges, those technologies often lead us to “outsource” many of our own functions to them – as is the case, for instance, with vast amounts of human memories and creations residing on smartphones and in the “cloud”. If the timeframes of arrival of human-like AI capabilities match those described by Kurzweil in his characterization of the “law of accelerating returns”, then questions regarding what constitutes a mind sufficiently like our own – and how we will treat those minds – will become ever more salient in the proximate future. It is important, however, for interest in advancing this field to become more widespread, and for political, cultural, and attitudinal barriers to its advancement to be lifted – for, unlike Kurzweil, I do not consider the advances of technology to be inevitable or unstoppable. We humans maintain the responsibility of persuading enough other humans that the pursuit of these advances is worthwhile and will greatly improve the length and quality of our lives, while enhancing our capabilities and attainable outcomes. Every movement along an exponential growth curve is due to a deliberate push upward by the efforts of the minds of the creators of progress, using the machines they have built.

This article is made available pursuant to the Creative Commons Attribution 4.0 International License, which requires that credit be given to the author, Gennady Stolyarov II (G. Stolyarov II). Learn more about Mr. Stolyarov here.

Fourth Enlightenment Salon – Political Segment: Discussion on Artificial Intelligence in Politics, Voting Systems, and Democracy


Gennady Stolyarov II
Bill Andrews
Bobby Ridge
John Murrieta


This is the third and final video segment from Mr. Stolyarov’s Fourth Enlightenment Salon.

Watch the first segment here.

Watch the second segment here.

On July 8, 2018, during his Fourth Enlightenment Salon, Gennady Stolyarov II, Chairman of the U.S. Transhumanist Party, invited John Murrieta, Bobby Ridge, and Dr. Bill Andrews for an extensive discussion about transhumanist advocacy, science, health, politics, and related subjects.

Topics discussed during this installment include the following:

• What is the desired role of artificial intelligence in politics?
• Are democracy and transhumanism compatible?
• What are the ways in which voting and political decision-making can be improved relative to today’s disastrous two-party system?
• What are the policy implications of the development of artificial intelligence and its impact on the economy?
• What are the areas of life that need to be separated and protected from politics altogether?

Join the U.S. Transhumanist Party for free, no matter where you reside, by filling out an application form that takes less than a minute. Members will also receive a link to a free compilation of Tips for Advancing a Brighter Future, providing insights from the U.S. Transhumanist Party’s Advisors and Officers on some of what you can do as an individual to improve the world and bring it closer to the kind of future we wish to see.

 

Review of Frank Pasquale’s “A Rule of Persons, Not Machines: The Limits of Legal Automation” – Article by Adam Alonzi

Adam Alonzi


From the beginning, Frank Pasquale, author of The Black Box Society: The Secret Algorithms That Control Money and Information, contends in his new paper “A Rule of Persons, Not Machines: The Limits of Legal Automation” that software, given its brittleness, is not suited to dealing with the complexities of taking a case through court and establishing a verdict. As he understands it, an AI cannot deviate far from the rules laid down by its creator. This assumption, which is not quite right even at the present time, only slightly tinges an otherwise erudite, sincere, and balanced treatment of the topic. He does not show much faith in the use of past cases to create datasets for the next generation of paralegals, automated legal services, and, in the more distant future, lawyers and jurists.

Lawrence Zelenak has noted that when taxes were filed entirely on paper, provisions were limited to avoid unreasonably imposing irksome nuances on the average person. Tax-return software has eliminated this “complexity constraint.” He goes on to state that without it, the laws, and the software that interprets them, are akin to a “black box” for those who must abide by them. William Gale has said taxes could be easily computed for “non-itemizers.” In other words, the government could use information it already has to present a “bill” to this class of taxpayers, saving time and money for all parties involved. However, simplification does not always align with everyone’s interests. TurboTax, whose business is built entirely on helping ordinary people navigate the labyrinth that is the American federal income tax, perceived a threat to its business model. This prompted it to put together a grassroots campaign to fight such measures. More than just another example of a business protecting its interests, it is an ominous foreshadowing of an escalation scenario that will transpire in many areas if and when legal AI becomes sufficiently advanced.

Pasquale writes: “Technologists cannot assume that computational solutions to one problem will not affect the scope and nature of that problem. Instead, as technology enters fields, problems change, as various parties seek to either entrench or disrupt aspects of the present situation for their own advantage.”

What he is referring to here, in everything but name, is an arms race. The vastly superior computational powers of robot lawyers may make the already perverse incentive to devise ever more Byzantine rules still more attractive to bureaucracies and lawyers. The concern is that the clauses and dependencies hidden within contracts will quickly explode, making them far too detailed even for professionals to make sense of in a reasonable amount of time. That this sort of software may become a necessary accoutrement in most or all legal matters means that the demand for it, or for professionals with access to it, will expand greatly at the expense of those who are unwilling or unable to adopt it. This, though Pasquale only hints at it, may lead to greater imbalances in socioeconomic power. On the other hand, he does not consider the possibility of bottom-up open-source (or state-led) efforts to create synthetic public defenders. While this may seem idealistic, it is fairly clear that the open-source model can compete with and, in some areas, outperform proprietary competitors.

It is not unlikely that, within subdomains of law, an array of arms races can and will arise between synthetic intelligences. If a lawyer knows its client is guilty, should it squeal? This will change the way jurisprudence works in many countries, but it would seem unwise to program any robot to knowingly lie about whether a crime, particularly a serious one, has been committed – including by omission. If it is fighting against a punishment it deems overly harsh for a given crime – say, trespassing to get a closer look at a rabid raccoon, or unintentional jaywalking – should it maintain its client’s innocence as a means to an end? A moral consequentialist, seeing no harm was done (or, in some instances, could possibly have been done), may persist in pleading innocent. A synthetic lawyer may be more pragmatic than deontological, but it is not entirely correct, and certainly shortsighted, to (mis)characterize AI as only capable of blindly following a set of instructions, like a Fortran program made to compute the nth member of the Fibonacci sequence.

Human courts are rife with biases: judges give more lenient sentences after taking a lunch break (65% more likely to grant parole – nothing to sneeze at), attractive defendants are viewed favorably by unwashed juries and trained jurists alike, and prejudices of all kinds exist against various “out” groups, which can tip the scales toward a guilty verdict or a harsher sentence. Why, then, would someone have an aversion to the introduction of AI into a system that is clearly ruled, in part, by the quirks of human psychology?

DoNotPay is an app that helps drivers fight parking tickets. It allows drivers with legitimate medical emergencies to gain exemptions. So, as Pasquale says, not only will traffic management be automated, but so will appeals. However, as he cautions, a flesh-and-blood lawyer takes responsibility for bad advice. DoNotPay not only fails to take responsibility, but “holds its client responsible for when its proprietor is harmed by the interaction.” There is little reason to think machines would do a worse job of adhering to privacy guidelines than human beings unless, as mentioned in the example of a machine ratting on its client, there is some overriding principle that would compel them to divulge the information to protect several people from harm if their client’s diagnosis in some way makes them a danger in their personal or professional life. Is the client responsible for the mistakes of the robot it has hired? Should the blame not fall upon the firm that provided the service?

Making a blockchain that could handle the demands of processing purchases and sales, one that takes into account all the relevant variables to make expert judgments on a matter, is no small task. As the infamous disagreement over the meaning of the word “chicken” in Frigaliment Importing Co. v. B.N.S. International Sales Corp. illustrates, the definition of what anything is can be a bit puzzling. The need to maintain a decent reputation to maintain sales is a strong incentive against knowingly cheating customers, but although cheating tends to be the exception for this reason, it is still necessary to protect against it. As one official at the Commodity Futures Trading Commission put it, “where a smart contract’s conditions depend upon real-world data (e.g., the price of a commodity future at a given time), agreed-upon outside systems, called oracles, can be developed to monitor and verify prices, performance, or other real-world events.”
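The oracle pattern the CFTC official describes can be sketched in a few lines of Python. This is purely illustrative – the commodities, prices, and function names are invented for the example, and a real oracle would be a trusted external data feed rather than a hard-coded table – but it shows the division of labor: the oracle supplies the real-world fact, and the contract logic merely checks it.

```python
def oracle_price(commodity):
    # Stand-in for an agreed-upon external price feed (the "oracle").
    # In practice this would query a trusted outside system.
    feeds = {"wheat": 5.40, "corn": 4.10}
    return feeds[commodity]

def settle_future(commodity, strike, quantity):
    """Settle a cash-settled future against the oracle's reported price.

    A positive result means the seller owes the buyer; a negative
    result means the buyer owes the seller.
    """
    spot = oracle_price(commodity)
    return round((spot - strike) * quantity, 2)

print(settle_future("wheat", 5.00, 100))  # 40.0 owed to the buyer
```

The contract itself contains no market knowledge; everything contestable is delegated to the oracle – which is precisely why the choice of oracle must be agreed upon in advance by the parties.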

Pasquale cites the SEC’s decision to force providers of asset-backed securities to file “downloadable source code in Python.” AmeriCredit responded by saying it “should not be forced to predict and therefore program every possible slight iteration of all waterfall payments” because its business is “automobile loans, not software development.” AmeriCredit does not seem to be familiar with machine learning. There is a case for making all financial transactions and agreements explicit on an immutable platform like blockchain. There is also a case for making all such code open source, ready to be scrutinized by those with the talents to do so or, in the near future, by those with access to software that can quickly turn it into plain English, Spanish, Mandarin, Bantu, Etruscan, etc.
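For readers unfamiliar with the term, a “waterfall payment” is simply a priority-ordered distribution of cash: collections from the loan pool pay the senior tranche first, then the next, and so on, with any residue flowing to equity. A minimal sketch of the kind of logic the SEC asked issuers to publish might look like the following – the tranche names and amounts are invented for illustration, not taken from any actual filing.

```python
def run_waterfall(collections, tranches):
    """Distribute collected cash to tranches in strict priority order.

    collections -- total cash collected from the loan pool this period
    tranches    -- ordered list of (name, amount_due) pairs, senior first
    """
    remaining = collections
    payments = {}
    for name, due in tranches:
        paid = min(remaining, due)  # pay as much as is owed, or as much as is left
        payments[name] = paid
        remaining -= paid
    payments["residual"] = remaining  # anything left over flows to equity
    return payments

print(run_waterfall(100.0, [("senior", 60.0), ("mezzanine", 30.0), ("junior", 25.0)]))
# senior is paid 60, mezzanine 30, junior only 10 of its 25; residual 0
```

Even this toy version makes the point against AmeriCredit’s objection: the structure is a loop over a priority list, not a separate program for “every possible slight iteration.”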

During the fallout of the 2008 crisis, some homeowners noticed that the entities on their foreclosure paperwork did not match the paperwork they received when their mortgages were sold to a trust. According to Dayen (2010), many banks did not fill out the paperwork at all. This seems to be a rather forceful argument in favor of the incorporation of synthetic agents into law practices. Like many futurists, Pasquale foresees an increase in “complementary automation.” Human-computer teams in chess can still trounce the best standalone engines – a commonly cited example of how two (very different) heads are better than one. Yet going to a lawyer is not like visiting a tailor. People, including fairly delusional ones, know if their clothes fit. They do not know whether they have received expert counsel – although the outcome of the case might give them a hint.

Pasquale concludes his paper by asserting that “the rule of law entails a system of social relationships and legitimate governance, not simply the transfer and evaluation of information about behavior.” This is closely related to the doubts expressed at the beginning of the piece about the usefulness of datasets in training legal AI. He then states that those in the legal profession must handle “intractable conflicts of values that repeatedly require thoughtful discretion and negotiation.” This appears to be the legal equivalent of epistemological mysterianism. It stands on still shakier ground than its analogue because it is clear that laws are, or should be, rooted in some set of criteria agreed upon by the members of a given jurisdiction. Shouldn’t the rulings of lawmakers and the values that inform them be at least partially quantifiable? There are efforts, like EthicsNet, which are trying to prepare datasets and criteria to feed machines in the future (because they will certainly have to be fed by someone!). There is no doubt that the human touch in law will not be supplanted soon, but the question is whether our intuition should be exalted as a guarantee of fairness or seen as a hindrance to moving beyond a legal system bogged down by the baggage of human foibles.

Adam Alonzi is a writer, biotechnologist, documentary maker, futurist, inventor, programmer, and author of the novels A Plank in Reason and Praying for Death: A Zombie Apocalypse. He is an analyst for the Millennium Project, the Head Media Director for BioViva Sciences, and Editor-in-Chief of Radical Science News. Listen to his podcasts here. Read his blog here.

Beginners’ Explanation of Transhumanism – Presentation by Bobby Ridge and Gennady Stolyarov II

Bobby Ridge
Gennady Stolyarov II


Bobby Ridge, Secretary-Treasurer of the U.S. Transhumanist Party, and Gennady Stolyarov II, Chairman of the U.S. Transhumanist Party, provide a broad “big-picture” overview of transhumanism and major ongoing and future developments in emerging technologies that present the potential to revolutionize the human condition and resolve the age-old perils and limitations that have plagued humankind.

This is a beginners’ overview of transhumanism – which means that it is for everyone, including those who are new to transhumanism and the life-extension movement, as well as those who have been involved in it for many years – since, when it comes to dramatically expanding human longevity and potential, we are all beginners at the beginning of what could be our species’ next great era.

Become a member of the U.S. Transhumanist Party for free, no matter where you reside.

See Mr. Stolyarov’s presentation, “The U.S. Transhumanist Party: Pursuing a Peaceful Political Revolution for Longevity”.

In the background of some of the video segments is a painting now owned by Mr. Stolyarov, from “The Singularity is Here” series by artist Leah Montalto.