Author Topic: Bayesian reasoning  (Read 2042 times)

PeteD01

  • Handlebar Stache
  • *****
  • Posts: 1395
Bayesian reasoning
« on: January 18, 2024, 12:32:40 PM »
Just came across this article which might be of interest to the group.

(I used to give my trainees two papers about Bayesian interpretation of biomedical research as required reading and I have put the links to them at the very bottom of the post.)

Considering that our nervous system operates as a Bayesian calculator right down to the individual neuron, it is probably a good idea to become familiar with the topic in the age of misinformation, conspiracy theories, Covid testing, and "artificial intelligence"; this article is a good introduction:


How to think like a Bayesian
In a world of few absolutes, it pays to be able to think clearly about probabilities. These five ideas will get you started
by Michael G Titelbaum
10 JANUARY 2024

We know from many years of studies that reasoning with probabilities is hard. Most of us are raised to reason in all-or-nothing terms. We’re quite capable of expressing intermediate degrees of confidence about events (quick: how confident are you that a Democrat will win the next presidential election?), but we’re very bad at reasoning with those probabilities. Over and over, studies have revealed systematic errors in ordinary people’s probabilistic thinking.

Luckily, there once lived a guy named the Reverend Thomas Bayes. His work on probability mathematics in the 18th century inspired a movement we now call Bayesian statistics. You may have heard ‘Bayesian’ talk thrown around in conversation, or mentioned in news articles. At its heart, Bayesianism is a toolkit for reasoning with probabilities. It tells you how to measure levels of confidence numerically, how to test those levels to see if they make sense, and then how to manage them over time.


https://psyche.co/guides/how-to-think-like-a-bayesian-and-make-better-decisions
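
Since the article is about reasoning with probabilities, here is a minimal worked example of the core rule - Bayes' theorem applied to a diagnostic test. The numbers (1% prevalence, 90% sensitivity, 95% specificity) are made up for illustration, and the Python sketch is mine, not from the article:

# Posterior probability of disease given a positive test, via Bayes' theorem.
# All numbers below are illustrative assumptions, not from the article.

def posterior_positive(prevalence, sensitivity, specificity):
    # P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
    p_pos_given_disease = sensitivity
    p_pos_given_healthy = 1.0 - specificity
    p_pos = (prevalence * p_pos_given_disease
             + (1.0 - prevalence) * p_pos_given_healthy)
    return prevalence * p_pos_given_disease / p_pos

# A rare condition (1% prevalence) and a decent test (90% sensitive, 95% specific):
print(posterior_positive(0.01, 0.90, 0.95))  # ~0.15, far from the 0.90 most people guess

Even a good test applied to a rare condition leaves the posterior well under 50% - the sort of base-rate effect the papers below discuss in the context of biomedical research.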



Toward evidence-based medical statistics. 1: The P value fallacy
S N Goodman
Ann Intern Med. 1999 Jun 15;130(12):995-1004.

https://pubmed.ncbi.nlm.nih.gov/10383371/

http://inger.gob.mx/pluginfile.php/96260/mod_resource/content/355/Archivos/C_Metodologia/MODULO_2/1.%20Toward%20Evidence-Based%20Medical%20Statistics.%201%20The%20P%20Value%20Fallacy.pdf



Toward evidence-based medical statistics. 2: The Bayes factor
S N Goodman
Ann Intern Med. 1999 Jun 15;130(12):1005-13.

https://pubmed.ncbi.nlm.nih.gov/10383350/

https://courses.botany.wisc.edu/botany_940/06EvidEvol/papers/goodman2.pdf
« Last Edit: March 29, 2024, 05:04:13 AM by PeteD01 »

Financial.Velociraptor

  • Handlebar Stache
  • *****
  • Posts: 2176
  • Age: 51
  • Location: Houston TX
  • Devour your prey raptors!
    • Living Universe Foundation
Re: Bayesian reasoning
« Reply #1 on: January 19, 2024, 08:08:48 AM »
Bayesian statistics are sort of a weird bunny, IMO.  It is useful to know the "priors" if you are otherwise stabbing in the dark and need to know which 'ballpark' to look in.  But it by definition adds an element of bias to your analysis.  I had a stat prof in Grad School who thought Bayes was basically The Devil....

PeteD01

  • Handlebar Stache
  • *****
  • Posts: 1395
Re: Bayesian reasoning
« Reply #2 on: January 19, 2024, 02:14:01 PM »
Bayesian statistics are sort of a weird bunny, IMO.  It is useful to know the "priors" if you are otherwise stabbing in the dark and need to know which 'ballpark' to look in.  But it by definition adds an element of bias to your analysis.  I had a stat prof in Grad School who thought Bayes was basically The Devil....

Speaking of the devil - here are some thoughts:

Bayesian reasoning is a process, not a series of discrete operations that can be evaluated in isolation. Each evaluation depends on the history that produced the current state constituting the "prior", which goes into the estimation of the "posterior" - and that posterior of course becomes the new "prior", and so on.

As such, the "prior" will differ between evaluators, since they have different histories, and they may also differ in their assessment of the strength of evidence.
Tacit knowledge goes into the evaluation as well, and that is by definition not easily analyzed.
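
A minimal sketch of that process, using a toy beta-binomial model (my example, nothing rigorous): the posterior after each batch of evidence becomes the prior for the next batch, and two evaluators with different starting priors digest the same data differently.

# Sequential Bayesian updating: posterior -> new prior -> posterior -> ...
# Toy beta-binomial model; priors and data are made up for illustration.

def update(a, b, heads, tails):
    # Beta(a, b) prior plus binomial data gives a Beta(a + heads, b + tails) posterior.
    return a + heads, b + tails

a1, b1 = 1, 1     # evaluator 1: near-uniform prior (little history)
a2, b2 = 50, 50   # evaluator 2: strong prior belief that the coin is fair

for heads, tails in [(7, 3), (8, 2), (9, 1)]:  # three batches of 10 flips each
    a1, b1 = update(a1, b1, heads, tails)      # the posterior is the next prior
    a2, b2 = update(a2, b2, heads, tails)

print(a1 / (a1 + b1))  # ~0.78: the weak prior has been swamped by the data
print(a2 / (a2 + b2))  # ~0.57: same data, but the strong prior still dominates

With enough shared evidence the two estimates converge, but after any finite amount of data the different histories still show through - which is the point about priors differing between evaluators.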

So it looks like Bayesian reasoning lacks rigor and precision.
But then we have to ask what the lack of rigor and precision is compared to - and here things become a bit weird.

If we look at a single neuron, we see that it consists of a cell body and a kind of biological wire capable of carrying a frequency-modulated output to other cells, including other neurons. This "wire" is called an axon, and the individual electrical impulses it carries are called action potentials.
The axon ends in a multitude of synapses that release neurotransmitters from small vesicles, influencing the membrane potential of the target cell; if that target cell is another neuron, this modulates the probability of its next action potential, which then travels down its own axon in turn, and so on.

The probability that an action potential is delayed or accelerated is a function of the excitatory and inhibitory inputs a neuron receives and of its own state when the inputs are integrated - the "prior" - since that state is determined by previous inputs, themselves derived from the states of other groups of neurons.
Even in this simplified view, it is clear that the "prior" is the result of the state of a network and not just a local phenomenon.

The rapid depolarization and propagation that characterize an action potential are an all-or-nothing phenomenon and can be seen as "digital", although the rate modulation around the base firing rate is analog again; the continuous modulation of the membrane potential, by contrast, is analog and infinitely variable within its constraints.
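
A toy leaky integrate-and-fire model (a standard textbook abstraction, not a biophysical simulation - the parameters are just plausible round numbers I picked) shows both sides at once: the membrane potential evolves continuously, while the spikes are all-or-nothing.

# Leaky integrate-and-fire neuron: analog membrane potential, digital spikes.
# Parameters are illustrative round numbers, not fitted to any real neuron.

import random

def simulate(inputs, dt=1.0, tau=20.0, v_rest=-70.0, v_thresh=-55.0, v_reset=-75.0):
    v, spike_times = v_rest, []
    for t, i_in in enumerate(inputs):
        # The membrane potential leaks toward rest while integrating input (analog)...
        v += dt * ((v_rest - v) / tau + i_in)
        # ...but the spike itself is all-or-nothing (digital).
        if v >= v_thresh:
            spike_times.append(t)
            v = v_reset
    return spike_times

# Noisy excitatory drive shifts the timing and rate of the spikes:
drive = [1.0 + random.gauss(0.0, 0.3) for _ in range(200)]
print(simulate(drive))  # spike times; stronger drive -> higher firing rate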

Now consider that a single neuron's connections number in the tens of thousands and that neurons number in the billions, and it is obvious that the complexity is insane.

So the human brain appears to be made up of structures that are best described as electrochemical hybrid analog/digital information processors that integrate "prior" states with new inputs to generate a rate-modulated output.

That is how one would go about if one wanted to construct a Bayesian calculator.

The key here is to understand that this Bayesian calculator is an analog computer. Analog computers are called analog(ue) because they operate as analogues of the real world. The issues with analog computers are their limited precision and the difficulty of programming such a thing.
The animal nervous system has answered the programming issue with ongoing remodeling throughout life; rewiring, either functional or physical, is a basic feature called neuroplasticity.
Precision is another matter, and we'll see that the ability to deal effortlessly with infinities (membrane potentials and firing rates are continuous variables that are expressed in real numbers) comes with another sort of precision and with great energetic efficiency.
Although continuous variables are expressed in real numbers, the brain does not deal in numbers but in analogues of real-world processes, particularly electrical and chemical analogues - that makes dealing with continuous variables essentially free in energy terms compared to simulating real-world processes in a simulated real-number space, and this is where computers come in.

Digital computers deal in numbers, but they cannot properly handle real numbers: they are restricted to the subset of reals representable as floating-point numbers, and even that only up to a given precision and size.
Ultimately, digital computers are restricted to simulations of reality, with the realism of the simulation determined by brute-force computational effort.
These are hard constraints, and increases in number size and precision come at a high energetic cost - brains made up of neurons operating as reality analogues do not have those constraints and can operate at incredible efficiency.
This is behind the warnings about the energy-hungry "AI" world, which have to be taken seriously.
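
The floating-point restriction is easy to demonstrate; these are standard 64-bit float behaviors, nothing exotic:

# Digital machines can represent only a finite subset of the reals.

print(0.1 + 0.2 == 0.3)        # False: 0.1 and 0.2 have no exact binary form
print(0.1 + 0.2)               # 0.30000000000000004
print(2.0**53 + 1 == 2.0**53)  # True: above 2^53, consecutive integers collapse
print(1e308 * 10)              # inf: the finite range simply overflows

from decimal import Decimal    # more precision is available on demand...
print(Decimal(1) / Decimal(3)) # ...but every extra digit costs time and energy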

Of course, the big question people have been asking is if computers will eventually become conscious.
The idea is that with increasing complexity, new emerging properties appear and consciousness could be one of them.
The trouble here is that throwing more and more computing power at "AI" iterations does not eliminate the constraints on the actual operation of digital computers. It might well be that the simulations become better and indistinguishable from aspects of reality and actually give the impression of increasing complexity - but the problem is that we are still looking at a simulation of increased complexity.
Also, an algorithmic approach proceeding in discrete steps exerts no evolutionary pressure in the direction of emergent consciousness, whereas, due to the fuzziness of complex self-organizing Bayesian calculators, consciousness may be selected for to deal with conflicting results that are not suppressed - and whose salience might actually be constitutive of consciousness.
(It appears that most of the work the brain does is in suppressing neural activity, thus operating in the abductive logical mode, or "eliminating everything but the most likely explanation")

In conclusion, Bayesian analog calculators of the complexity of our nervous system are historical processes that operate continuously (not in stepwise fashion, and even "prior" and "posterior" are kind of smeared out in time giving consciousness the space to operate in) and can handle continuous variables effortlessly.
Digital computers are not capable of that and this is a fundamental difference - no matter how good the simulations are.


Here is something about analog chips that illustrates how hyped up all that "AI" stuff really is with respect to machine consciousness:

The Unbelievable Zombie Comeback of Analog Computing
Computers have been digital for half a century. Why would anyone want to resurrect the clunkers of yesteryear?
Let's Get Physical

https://www.wired.com/story/unbelievable-zombie-comeback-analog-computing/


And something about computational constraints of digital computers:

Why Computers are Bad at Algebra
PBS Infinite Series

https://www.youtube.com/watch?v=pQs_wx8eoQ8




PeteD01

  • Handlebar Stache
  • *****
  • Posts: 1395
Re: Bayesian reasoning
« Reply #3 on: January 20, 2024, 11:54:24 AM »
Looks like IBM has made some progress in analog "neuromorphic computing".
Digital computing is a dead end when it comes to "AI" as currently hyped, and we are years away from solving the issue:


Analog Computing for AI: How It Could Make Us Re-Think the Future
Dr. Tehseen Zia, Tenured Associate Professor
Last updated: 15 November, 2023

However, the digital computing systems supporting AI have struggled to keep pace, leading to slower training speeds, suboptimal performance, and increased energy consumption. This threatens AI’s future and calls for re-evaluating traditional computing systems.

Thanks to research by IBM, Analog AI emerges as a beacon of hope, offering potential solutions for efficiency and environmental responsibility.




https://www.techopedia.com/analog-computing-for-ai-how-it-could-make-us-re-think-the-future
 
« Last Edit: January 20, 2024, 03:11:30 PM by PeteD01 »

PeteD01

  • Handlebar Stache
  • *****
  • Posts: 1395
Re: Bayesian reasoning
« Reply #4 on: January 24, 2024, 12:14:01 PM »
Looks like the first neuromorphic hybrid analog/digital supercomputer is going to go online in April 2024:

Deep South – A Neuromorphic Supercomputer

Published by Steven Novella
December 14, 2023

By one recent estimate, using a neuromorphic computer to process the same information is 4-16 times more energy efficient than a non-neuromorphic system. If we take an average figure and say that neuromorphic computers would use about 10% of the energy of conventional computers, that is a massive change. There is obviously a cost efficiency here, but also this could be a major efficiency breakthrough in terms of mitigating global warming. The most environmentally friendly energy is the energy you don’t use. It is hard to overestimate the potential benefit here, especially as AI systems are ramping up, complete with massive energy demand.

This relates also to the other benefit of the neuromorphic design – there are some applications where they are computationally more efficient than conventional computers, not just energy efficient but faster and more powerful. You can, theoretically, simulate any computational system virtually, but that can take a massive amount of computing power and is slower. That is basically doing it the hard way (by, ironically, doing it in software rather than hardware). Designing the hardware for the specific application is just more powerful and efficient in every way.

And guess which applications the neuromorphic design is optimal for – many types of AI computing.

Deep South will be operational by April 2024, so we don’t have to wait long to see it come online. I will then want to track how it performs, both in terms of computing power and energy efficiency. If it turns out to be a successful experiment, which I hope it is, then perhaps this will accelerate massive adoption of neuromorphic computing, starting with the applications where it makes the most sense.


https://theness.com/neurologicablog/deep-south-a-neuromorphic-supercomputer/

PeteD01

  • Handlebar Stache
  • *****
  • Posts: 1395
Re: Bayesian reasoning
« Reply #5 on: February 25, 2024, 02:24:42 PM »
I just bought this lecture series which is on sale for $30 and I have a code to get another $15 off, bringing it down to $15 total.

The code is: Q9V9

(sale ends tomorrow 02/26/2024 - if you miss it, there are sales on and off, just subscribe to the newsletter)

I purchased the course because of the discussion of Euclidean geometry, which I am not too familiar with. Euclidean geometry and probability/statistics are the two subjects the course covers. I know a little about statistics, and I am really impressed by the professor; I think this course is really good for non-STEM people who want to get a handle on the subject:


Mathematics, Philosophy, and the "Real World"
Judith V. Grabiner, Ph.D. Professor, Pitzer College
Course No. 1440

Course Overview
Mathematics has spread its influence far beyond the realm of numbers. The concepts and methods of mathematics are crucially important to all of culture and affect the way countless people in all spheres of life look at the world. Consider these cases:

When Leonardo da Vinci planned his mural The Last Supper in the 1490s, he employed geometric perspective to create a uniquely striking composition, centered on the head of Jesus.
When Thomas Jefferson sat down to write the Declaration of Independence in 1776, he composed it on the model of a geometric proof, which is what gives it much of its power as a defense of liberty.
When Albert Einstein developed his theory of general relativity in the early 20th century, he used non-Euclidean geometry to prove that the path of a ray of light, in the presence of a gravitational field, is not straight but curved.
Intriguing examples like these reflect the important dialogue between mathematics and philosophy that has flourished throughout history. Indeed, mathematics has consistently helped determine the course of Western philosophical thought. Views about human nature, religion, truth, space and time, and much more have been shaped and honed by the ideas and practices of this vital scientific field.

Award-winning Professor Judith V. Grabiner shows you how mathematics has shaped human thought in profound and exciting ways in Mathematics, Philosophy, and the "Real World," a 36-lecture series that explores mathematical concepts and practices that can be applied to a fascinating range of areas and experiences.

Believing that mathematics should be accessible to any intellectually aware individual, Professor Grabiner has designed a course that is lively and wide-ranging, with no prerequisites beyond high school math. For those with an interest in mathematics, this course is essential to understanding its invaluable impact on the history of philosophical ideas; for those with an interest in philosophy, Professor Grabiner's course reveals just how indebted the field is to the mathematical world.

Math Meets Philosophy

In a presentation that is clear, delightful, and filled with fascinating case histories, Professor Grabiner focuses on two areas of mathematics that are easily followed by the nonspecialist: probability and statistics, and geometry. These play a pivotal role in the lives of ordinary citizens today, when statistical information is everywhere, from medical data to opinion polls to newspaper graphs; and when the logical rules of a geometric proof are a good approach to making any important decision.

Mathematics, Philosophy, and the "Real World" introduces enough elementary probability and statistics so that you understand the subtleties of the all-important bell curve. Then you are immersed in key theorems of Euclid's Elements of Geometry, the 2,200-year-old work that set the standard for logical argument. Throughout the course, Professor Grabiner shows how these fundamental ideas have had an enormous impact in other fields. Notably, mathematics helped stimulate the development of Western philosophy and it has guided philosophical thought ever since, a role that you investigate through thinkers such as these:

Plato: Flourishing in the 4th century B.C.E., Plato was inspired by geometry to argue that reality resides in a perfect world of Forms accessible only to the intellect—just like the ideal circles, triangles, and other shapes that seem to exist only in the mind.
Descartes: Writing in the 17th century, René Descartes used geometric reasoning in a systematic search for all possible truths. In a famous exercise, he doubted everything until he arrived at an irrefutable fact: "I think, therefore I am."
Kant: A century after Descartes, Immanuel Kant argued that metaphysics was possible by showing its kinship with mathematics. The perfection of Euclidean geometry led him to take for granted that space has to be Euclidean.
Einstein: Working in the early 20th century with a concept of "straight lines" that was different from Euclid's, Albert Einstein showed that gravity is a geometric property of non-Euclidean space, which is an essential idea of his general theory of relativity.
Non-Euclidean Geometry Explained

The discovery of non-Euclidean geometry influenced fields beyond mathematics, laying the foundation for new scientific and philosophical theories and also inspiring works by artists such as the Cubists, the Surrealists, and their successors leading up to today.

Non-Euclidean geometry was a stunning intellectual breakthrough in the 19th century, and you study how three mathematicians, working independently, overthrew the belief that Euclid's geometry was the only possible consistent system for dealing with points, lines, surfaces, and solids. Einstein's theory of relativity was just one of the many ideas to draw on the non-Euclidean insight that parallel lines need not be the way Euclid imagined them.
Professor Grabiner prepares the ground for your exploration of non-Euclidean geometry by going carefully over several of Euclid's proofs so that you understand Euclid's theory of parallel lines at a fundamental level. You even venture into the visually rich world of art and architecture to see how Renaissance masters used Euclidean geometry to map three-dimensional space onto flat surfaces and to design buildings embodying geometrical balance and symmetry. The Euclidean picture of space became internalized to a remarkable extent during and after the Renaissance, with a far-reaching effect on the development of philosophy and science.
Change the Way You Think

Mathematics has not only changed the way specialists think about the world, it has given the rest of us an easily understandable set of concepts for analyzing and understanding our surroundings. Professor Grabiner provides a checklist of questions to ask about any statistical or probabilistic data that you may encounter. Her intriguing observations include the following:

Statistics: Biologist and author Stephen Jay Gould, who developed abdominal cancer, was told his disease had an eight-month median survival time after diagnosis. The diagnosis sounded hopeless, but his understanding of the characteristics of the median (as opposed to the mean or mode) gave him a strategy for survival.
Bad graphs: There are many ways to make a bad graph; some deliberately misleading, others merely badly conceived. Beware of a graph that starts at a number higher than zero, since comparisons between different data points on the graph will be exaggerated.
Polls: The Literary Digest poll before the 1936 U.S. presidential election was the largest ever conducted and predicted a landslide win for Alf Landon over Franklin Roosevelt. Yet the result was exactly the opposite due to an unrecognized systematic bias in the polling sample.
Probability: Intuition can lead one astray when one is judging probabilities. You investigate the case of an eyewitness to an accident who has done well on tests of identifying the type of vehicle involved. But a simple calculation shows that she is more likely wrong than not.
The Power of Mathematical Thinking

Mathematics, Philosophy, and the "Real World" focuses on mathematics and its influence on culture in the West. But for an alternative view, Professor Grabiner devotes a lecture to mathematics in classical China, where geometers discovered some of the same results as the ancient Greeks but with a very different approach. One major difference is that the Chinese didn't use indirect proof, a technique that proves a proposition true because the assumption that it is false leads to a contradiction.

In another lecture, Professor Grabiner gives time to the critics of mathematics—philosophers, scientists, poets, and writers who have argued against the misuse of mathematics. Charles Dickens speaks for many in his memorable novel Hard Times, which depicts the human misery brought by Victorian England's obsession with statistics and efficiency.

But even more memorable are the cases in which mathematics turns up where it is least expected. "We hold these truths to be self evident ..." So wrote Thomas Jefferson in the second sentence of the Declaration of Independence. He had originally started, "We hold these truths to be sacred and undeniable ... " The change to "self-evident" was probably made at the suggestion of Benjamin Franklin, a great scientist as well as a statesman, who saw the power of appealing to scientific thinking. A Euclidean proof begins with axioms (self-evident truths) and then moves through a series of logical steps to a conclusion.

With her consummate skill as a teacher, Professor Grabiner shows how Jefferson laid out America's case against Great Britain with all of the rigor he learned in Euclid's Elements, working up to a single, irrefutable conclusion: "That these United Colonies are, and of Right ought to be Free and Independent States."

There is arguably no greater demonstration of the power of mathematics to transform the real world—and it's just one of the fascinating insights you'll find in Mathematics, Philosophy, and the "Real World."


https://www.thegreatcourses.com/courses/mathematics-philosophy-and-the-real-world
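

The eyewitness item in the course description above is the classic base-rate problem. A quick sketch of the arithmetic, using the standard textbook numbers (15% of vehicles are the identified type, the witness is right 80% of the time - assumed here, since the course excerpt doesn't give its figures):

# Base-rate calculation for the eyewitness example. The numbers are the
# standard textbook version, assumed here for illustration.

base_rate = 0.15    # P(vehicle is the identified type)
reliability = 0.80  # P(witness says "type X" | it is X) = P(says "not X" | it is not)

p_says_x = base_rate * reliability + (1 - base_rate) * (1 - reliability)
p_x_given_says_x = base_rate * reliability / p_says_x

print(p_x_given_says_x)  # ~0.41: despite testing well, she is more likely wrong than right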

PeteD01

  • Handlebar Stache
  • *****
  • Posts: 1395
Re: Bayesian reasoning
« Reply #6 on: February 29, 2024, 09:06:32 AM »
Let's do some Bayesian reasoning:

Billionaires like Elon Musk have been spreading the idea of an impending "singularity" precipitated by the rapid advancement of AI capabilities to the point of exceeding human intelligence and ability to control the technology.

Musk and others went so far as to ask for a pause in development of AI:

https://futureoflife.org/open-letter/pause-giant-ai-experiments/

Supposedly, the machines would then threaten the survival of humanity as we know it - definitely a sizable bogeyman.
They succeeded in causing a minor media storm about this type of existential risk from AI.

In the open letter, they mention "powerful digital minds" that "might eventually outnumber, outsmart, obsolete and replace us".
Alright, first of all, these things are not "minds" by any stretch of the imagination.
Secondly, artificial minds are not going to be "digital minds" resembling what is being developed currently, only more powerful - the reason being the limitations inherent to digital computing, such as energy inefficiency and the other constraints discussed in previous posts.

The likelihood of digital computers ever becoming "powerful digital minds" is probably in the same ballpark as the little people who live in your TV set staging a revolution to take over the refrigerator.

Mind and consciousness are properties that emerge from complexity, which is severely constrained in digital computing.
If properties such as consciousness ever emerge, it will be from hybrid digital/analog systems based on adaptive electrochemical processes - look to biology, not electronics, for that (if anything, more Neuralink than OpenAI).

So that doomsday scenario of machines taking over the world any time soon is just silly.

But here is a much more interesting part of the letter:

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Billionaires asking for regulation? That is a new one - but wait, they are asking for "new and capable regulatory authorities dedicated to AI" not for regulation embedded in existing entities.

This makes a lot more sense coming from the perspective of oligarchs who routinely engage in regulatory capture and who are confronted with a serious emerging threat stemming from the confluence of big data, blockchain technology, and AI.

In combination, these technologies can be put to use in True Cost Accounting (TCA).
Raw big data is already plentiful, and blockchain technology for safe data sharing between secure digital ledgers also exists.
AI already shines in pattern detection.

Interestingly, ecosystem monitoring and management will likely eventually also use these technologies.

TCA is already technically feasible, and it has the potential to seriously disrupt global business activities - which often involve child/slave labor, genocide, and large-scale environmental destruction - by exposing those practices and putting a price tag on them.

Hence, oligarchs have a vested interest in an overarching regulatory agency as a one-stop shop for regulatory capture, fighting TCA by interfering with the activity of existing regulatory agencies.

The good news is that AI is developing in a decentralized and task-oriented manner not suitable for central control and suppression in a democracy.

The oligarchs and swaths of industry are concerned about having to face accountability due to TCA.

Here is just one example hinting at things to come:


The Big Data, Artificial Intelligence, and Blockchain in True Cost Accounting for Energy Transition in Europe
by Joanna Gusc, Peter Bosma, Sławomir Jarka and Agnieszka Biernat-Jarka

Abstract
The current energy prices do not include the environmental, social, and economic short and long-term external effects. There is a gap in the literature on the decision-making model for the energy transition. True Cost Accounting (TCA) is an accounting management model supporting the decision-making process. This study investigates the challenges and explores how big data, AI, or blockchain could ease the TCA calculation and indirectly contribute to the transition towards more sustainable energy production. The research question addressed is: How can IT help TCA applications in the energy sector in Europe? The study uses qualitative interpretive methodology and is performed in the Netherlands, Germany, and Poland. The findings indicate the technical feasibilities of a big data infrastructure to cope with TCA challenges. The study contributes to the literature by identifying the challenges in TCA application for energy production, showing the readiness potential for big data, AI, and blockchain to tackle them, revealing the need for cooperation between accounting and technical disciplines to enable the energy transition.


https://www.mdpi.com/1996-1073/15/3/1089


Fru-Gal

  • Handlebar Stache
  • *****
  • Posts: 1255
Re: Bayesian reasoning
« Reply #7 on: February 29, 2024, 09:29:03 AM »
Love your thoughtful analysis of this. I believe I have commented in this forum before that Sam Altman should be on everyone’s radar (not to use a war metaphor but yeah) as the new evil guy. His Worldcoin was/is atrocious and I can’t believe he’s gotten a pass for it. His “AI is too dangerous” schtick is getting old. I think you are correct in understanding their pursuit of a new regulatory moat.

PeteD01

  • Handlebar Stache
  • *****
  • Posts: 1395
Re: Bayesian reasoning
« Reply #8 on: March 01, 2024, 10:54:26 AM »
Exaggerated claims of emerging intelligence/consciousness have always been peddled by a lot of people involved with computers and machine learning.
There is a humongous timeline chart in the linked article that illustrates this.
The singularity talk is just the latest incarnation of it.

The products of generative AI are not art and not language - they only appear to be art and language because of the way our brains process them, which creates the illusion.
So the generative work is mostly performed in human brains, which respond to familiar patterns in predictable ways - and that is not exactly a groundbreaking discovery.

The amazing thing about modern AI is not the illusion of intelligence but the ability to analyze and extract patterns from big data and to present them in intelligible form in ordinary language without any understanding.

It is also remarkable that we are coming full circle back to analog computing after almost seven decades:


We’ve been here before: AI promised humanlike machines – in 1958
Published: February 29, 2024
Danielle Williams
Postdoctoral Fellow in Philosophy of Science, Arts & Sciences at Washington University in St. Louis

A roomsize computer equipped with a new type of circuitry, the Perceptron, was introduced to the world in 1958 in a brief news story buried deep in The New York Times. The story cited the U.S. Navy as saying that the Perceptron would lead to machines that “will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”

More than six decades later, similar claims are being made about current artificial intelligence. So, what’s changed in the intervening years? In some ways, not much.

The field of artificial intelligence has been running through a boom-and-bust cycle since its early days. Now, as the field is in yet another boom, many proponents of the technology seem to have forgotten the failures of the past – and the reasons for them. While optimism drives progress, it’s worth paying attention to the history.

The Perceptron, invented by Frank Rosenblatt, arguably laid the foundations for AI. The electronic analog computer was a learning machine designed to predict whether an image belonged in one of two categories. This revolutionary machine was filled with wires that physically connected different components together. Modern day artificial neural networks that underpin familiar AI like ChatGPT and DALL-E are software versions of the Perceptron, except with substantially more layers, nodes and connections.


https://theconversation.com/weve-been-here-before-ai-promised-humanlike-machines-in-1958-222700
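
For the curious, the Perceptron's learning rule is simple enough to fit in a few lines. This is a minimal software sketch of the rule with toy data I made up - the original, of course, was wired hardware, not code:

# Rosenblatt-style perceptron: a two-category classifier trained by
# nudging the weights only when it misclassifies. Data below is made up.

def train(samples, labels, epochs=20, lr=0.1):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):  # labels are +1 or -1
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            prediction = 1 if activation >= 0 else -1
            if prediction != y:            # update only on mistakes
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Two linearly separable categories in a toy 2-D feature space:
X = [(2.0, 1.0), (1.5, 2.0), (-1.0, -1.5), (-2.0, -0.5)]
y = [1, 1, -1, -1]
print(train(X, y))  # a separating line; modern networks stack many such units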
« Last Edit: March 01, 2024, 01:53:09 PM by PeteD01 »

PeteD01

  • Handlebar Stache
  • *****
  • Posts: 1395
Re: Bayesian reasoning
« Reply #9 on: March 02, 2024, 08:30:10 AM »
This article has aged well (and, just for fun, it contains everything one needs to know to see clearly that tech types discussing uploading minds to the cloud are committing a category mistake - which is rather embarrassing: the brain is an analog computer, so there is no software or memory comparable to a digital computer's, and the idea of a mind separable from the hardware of the brain is nonsensical):


WHY ALGORITHMS SUCK AND ANALOG COMPUTERS ARE THE FUTURE
Bernd Ulmann | 06.07.2017

Clearly analog computing holds great promise for the future. One of the main problems to tackle will be that programming analog computers differs completely from everything students learn in university today.

In analog computers there are no algorithms, no loops, nothing as they know it. Instead there are a couple of basic, yet powerful computing elements that have to be interconnected cleverly in order to set up an electronic analog of some mathematically described problem.

Technological challenges such as powerful interconnects or highly integrated yet precise computing elements seem minor in comparison to this educational challenge. In the end, changing the way people think about programming will be the biggest hurdle for the future of analog computing.


https://blog.degruyter.com/algorithms-suck-analog-computers-future/
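
To make "interconnecting computing elements instead of writing an algorithm" concrete: on an analog machine, solving x'' = -x means wiring two integrators into a feedback loop so that the circuit is the equation. Here is a digital approximation of that wiring (my sketch; the real hardware does this continuously, with no steps at all):

# Digital approximation of an analog-computer patch for x'' = -x:
# two integrators in a feedback loop. The hardware version has no loop
# and no steps - the voltages just evolve; the Python loop only simulates that.

dt = 0.001
x, v = 1.0, 0.0  # initial conditions, set as voltages on the integrators
for _ in range(int(2 * 3.141592653589793 / dt)):  # one full period
    v += -x * dt  # integrator 1: v is the integral of -x
    x += v * dt   # integrator 2: x is the integral of v

print(round(x, 3))  # ~1.0: the oscillation returns to its starting point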
« Last Edit: March 06, 2024, 02:39:30 PM by PeteD01 »

blue_green_sparks

  • Bristles
  • ***
  • Posts: 484
  • FIRE'd 2018
Re: Bayesian reasoning
« Reply #10 on: March 03, 2024, 10:49:28 AM »
When I worked as an airborne computer designer, my product teams were generally not allowed to implement "self-modifying code" (AI), because it was deemed impossible to "certify" the system's behavior as 100% deterministic, making it "unsafe". Certification of critical avionics systems (there are several levels of criticality) requires 100% code-path coverage and analysis. Every processor instruction must be exercised and checked, and all "dead code" found and removed. We could, however, implement algorithms such as adaptive signal filtering as long as we could prove the system was bounded. Large aircraft have been "self-driving" for decades as you land on a runway you can't even see out of your fogged-in passenger window.

That being said, the air transportation/logistics business is basically a probability analysis of cost vs. safety, often expressed as flight hours per accident.

Compare it to other major forms of transportation – with 0.04 deaths per 100 million miles traveled, train travel is much more dangerous than airplanes’ 0.01 deaths per 100 million miles.

From what I can surmise, Bayesian reasoning can certainly be useful when making flight safety decisions, not unlike drug safety analysis.


PeteD01

  • Handlebar Stache
  • *****
  • Posts: 1395
Re: Bayesian reasoning
« Reply #11 on: March 03, 2024, 11:28:51 AM »
Here is a more in-depth discussion of neuromorphic computing with an emphasis on energy efficiency - which, of course, cannot be discussed on its own, as it is basically a non-negotiable requirement for advanced untethered AI:


Sec. Neuromorphic Engineering
Volume 13 - 2019 | https://doi.org/10.3389/fnins.2019.00666

Making BREAD: Biomimetic Strategies for Artificial Intelligence Now and in the Future

Jeffrey L. Krichmar
William Severa
Muhammad S. Khan
James L. Olds

It has been suggested that the brain strives to minimize its free energy by reducing surprise and predicting outcomes (Friston, 2010). Thus, the brain's efficient power consumption may have a basis in thermodynamics and information theory. That is, the system may adapt to resist a natural tendency toward disorder in an ever-changing environment. Top-down signals from downstream areas (e.g., frontal cortex or parietal cortex) can realize predictive coding (Clark, 2013; Sengupta et al., 2013a,b). In this way organisms minimize the long-term average of surprise, which is the inverse of entropy, by predicting future outcomes. In essence, they minimize the expenditures required to deal with unanticipated events. The idea of minimizing free energy has close ties to many existing brain theories, such as the Bayesian brain, predictive coding, cell assemblies, and Infomax, as well as an evolutionary-inspired theory called Neural Darwinism or neuronal group selection (Friston, 2010). For field robotics, a predictive controller could allow the robot to reduce unplanned actions (e.g., obstacle avoidance) and produce more efficient behaviors (e.g., optimal foraging or route planning). For IoT and edge processing, predictions could reduce communication data. Rather than sending redundant predictable information, it would only need to “wake up” and report when something unexpected occurs.

https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2019.00666/full
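
The "wake up and report only when something unexpected occurs" idea at the end of the quote is easy to sketch: keep a running prediction and transmit only when the prediction error (the surprise) is large. A toy version - the filter, thresholds, and data here are all made up by me, not taken from the paper:

# Toy "report only surprises" filter: an edge sensor keeps a running
# prediction and transmits only readings that deviate from it. All
# parameters and data are made up for illustration.

def surprise_filter(readings, alpha=0.5, threshold=3.0):
    prediction = readings[0]
    for r in readings:
        if abs(r - prediction) > threshold:
            yield r                             # surprising: worth transmitting
        prediction += alpha * (r - prediction)  # update the running prediction

stream = [20.0, 20.1, 19.9, 20.2, 27.5, 27.3, 20.0, 20.1]
print(list(surprise_filter(stream)))  # [27.5, 27.3, 20.0]: only the jumps get sent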
« Last Edit: March 03, 2024, 01:56:23 PM by PeteD01 »

PeteD01

  • Handlebar Stache
  • *****
  • Posts: 1395
Re: Bayesian reasoning
« Reply #12 on: March 03, 2024, 11:35:47 AM »
...

From what I can surmise, Bayesian reasoning can certainly be useful when making flight safety decisions, not unlike drug safety analysis.

Absolutely - Bayesian reasoning gives predictions with error estimates, and that is no different from how real-world (analog) measurements work.
Digital processing gives results of great precision but may actually be inferior in accuracy when compared to Bayesian predictions.
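
A toy contrast between the two (all numbers made up): a digital pipeline can report a miscalibrated value to fifteen decimal places, while a Bayesian estimate reports less precision but honest uncertainty.

# Precision is not accuracy - all numbers below are made up for illustration.

true_value = 10.0

biased_point_estimate = 10.237190482736451  # very precise, silently wrong
bayesian_mean, bayesian_sd = 9.9, 0.4       # posterior summary with uncertainty

low, high = bayesian_mean - 2 * bayesian_sd, bayesian_mean + 2 * bayesian_sd
print(abs(biased_point_estimate - true_value))  # ~0.24 off, with no warning attached
print((low, high), low <= true_value <= high)   # (9.1, 10.7) True: the interval covers the truth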
« Last Edit: March 03, 2024, 01:57:37 PM by PeteD01 »