Forums › All General Topics › How to survive in the age of Artificial Intelligence
Tagged: Artificial intelligence
- This topic has 6 replies, 3 voices, and was last updated 2 days ago by Ashish Dalela.
November 19, 2023 at 4:50 am #15984

November 19, 2023 at 5:48 am #15985
There are two important things with AI. First, training a machine is very expensive compared to training a human. Second, even after training a machine, it makes a lot of mistakes. I will give some context.
If you follow tech news, you know that Microsoft has been financing OpenAI, and the two have had many arguments about "infrastructure costs". AI requires millions of servers, switches, and disks to (a) store most of the internet and (b) use that data to build a model. This is very expensive.
Due to the cost, Microsoft argues that it is better to build domain-specific AI models. It is just like breaking knowledge into mathematics, physics, chemistry, biology, cosmology, economics, sociology, and dozens of departments, and eventually into sub-departments. The general truth requires infinite time and space. But if you settle for a smaller domain-specific model, it is cheaper, although the model built by that process (ignoring the general truth) has many flaws and limitations. Due to the extremely high cost of AI, the long-term trajectory will be smaller domain-specific models, with many mistakes. Humans will be required to continuously correct these mistakes by using cross-domain knowledge, just as humans are required in science to correct and improve theories.
It is important to always remember Gödel's incompleteness. AI is mathematics, which is always either inconsistent or incomplete. Incompleteness means it cannot answer all questions. But people will want answers to all questions. AI will then make guesses and give false answers. This is inconsistency. Due to the presence of false answers, some humans will be required to check everything that AI is doing.
I recently asked OpenAI: "Tell me about Ashish Dalela". There were many correct answers, but there was this wrong answer: "Ashish Dalela lives in the US with his wife and two children". I live in India, and I have one child. But this is not public information. Based on the public information about my books, OpenAI falsely inferred that I live in the US. Probably most Indians in the US have two children, so AI inferred that I must have two children. Basically, AI tries to predict based on the information it has, and when that information is false or absent, it makes mistakes. To avoid these mistakes, people will have to get information from AI and then cross-check each piece of it manually, because they cannot trust AI blindly. It means that AI will become a tool that people can use without being sure that it works perfectly.
About 3-5 years ago, there was a lot of excitement about self-driving cars. Hardly anyone talks about them anymore. The reason is that nobody has been able to train a model perfectly even for car driving. Yes, on average, AI can do better than a distracted driver. But a diligent driver can do better than AI. Since many accidents happen due to distracted drivers, AI can reduce them. But it will do worse than diligent drivers and cause accidents. Overall, we cannot get rid of drivers, but we can give them AI as a tool that relieves the driver of many mundane tasks without eliminating him altogether.
The situation will be just like pilots sitting in the cockpit of an aircraft although most of the flying is done by autopilot. In case of severe turbulence, the pilot takes control of the plane because he cannot rely on the autopilot to not make any mistakes and the cost of even a small mistake is hundreds of lives.
There is also the issue of assigning responsibility. In the case of a road accident, we generally blame the car driver rather than the car manufacturer. But if AI drives a car and there is an accident, then the blame will go to the car manufacturer. It will be saddled with insurance claims about bad software leading to accidents. There will be many lawsuits about bugs in software leading to loss of life, about whether the manufacturer was negligent in producing the autopilot software, about whistleblowers revealing known flaws in public, and so on, such that no car manufacturer will take the risk of saying that drivers are not required. They will say that AI will assist the driver and that the driver is ultimately responsible for the outcomes.
The highest level of perfection in AI at present is in language models. I frequently use Grammarly (an AI tool) to correct the English in what I write. Over several years, I have noticed that the tool is about 70% accurate: roughly 20% of the errors it points out are not errors, and roughly 10% of the genuine errors it never catches. We can call these the false positive and false negative cases. I don't use sophisticated English. My English is pretty basic, as I am not a native English speaker. And yet, the best AI tool for correcting English doesn't catch all the errors and highlights many non-errors. Essentially, I cannot rely on AI 100%. I have to diligently read each flagged error and then decide whether it is an error and whether it should be corrected. Relative to the time taken to correct errors manually, I still save time. But I cannot completely get rid of manual editing.
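These false-positive and false-negative cases can be summarized with the standard precision and recall metrics. A minimal sketch in Python, where the counts are hypothetical numbers chosen to match the rough 20%/10% figures above:

```python
# Hypothetical tallies: out of 100 issues the tool flags, suppose 20 are
# not real errors (false positives), and suppose the text also contained
# 10 real errors the tool never flagged (false negatives).
flagged = 100
false_positives = 20                    # flagged, but not actually errors
true_positives = flagged - false_positives
false_negatives = 10                    # real errors the tool missed

precision = true_positives / flagged    # share of flags that are real errors
recall = true_positives / (true_positives + false_negatives)

print(f"precision: {precision:.2f}")    # 0.80
print(f"recall:    {recall:.2f}")       # 0.89
```

A precision below 1.0 means every flag must still be read and judged by a human, and a recall below 1.0 means manual proofreading still catches errors the tool misses, which is exactly the experience described above.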
To summarize, these are the key problems of AI:
- Gödel's incompleteness entails that no mathematical model can be both consistent and complete. Incompleteness means that many questions cannot be answered. But if we force the model to give answers, then it will produce false answers. Since we don't know which answers are false, we have to cross-check every answer to determine which ones are true or false.
- Even within the domain where the mathematical model can predict, the cost of prediction grows exponentially with accuracy and the size of the domain, because we need to provide all the information for the AI model to be accurate, large domains need lots of information, and all the information is literally infinite. Processing this information takes ever-growing space and time (servers, storage, and networking), making it cost-prohibitive.
- Since mistakes will still exist (we can never provide all the information), we cannot get rid of humans in cases where the cost of failure is disastrous (such as car and plane autopilots). The legalities of replacing humans with machines are also cost-prohibitive in case of errors. Hence, we need both humans and machines. In fact, the human must now be trained to use the machine, making it more expensive.
- At every stage of evolution, we have to do a cost-benefit analysis of AI. For example, I don’t use a paid version of Grammarly because it is not very useful for me. I do use the free version of Grammarly because it saves some time. Some people who are being paid well to write will likely use the paid version of Grammarly. For them, it is a helpful tool to assist their writing. Hence, AI is useful to some people and not useful to everyone.
- We are right now in the early stages of AI where AI-generated content is almost nil. Fast forward a few years and the AI-generated content will be huge. Now, AI will have trouble figuring out the difference between fact and fiction (generated by AI). All AI training will then either come to a halt or will dramatically increase in cost as humans have to separate fact from fiction.
In short, think of AI as yet another tool. Cars replaced horse-drawn carriages because the car could do everything the horse-drawn carriage could do, at a much lower cost. Calculators replaced manual computation because the calculator could do everything manual calculation could do, at a much lower cost. Computers replaced many manual tasks because they could do all that the manual work was doing, at a much lower cost. But AI cannot do all that the human mind does, because the human mind is not mathematical. Trying to bring mathematics closer to the human mind is cost-prohibitive. If we force mathematics on the human mind, then the result will be incompleteness or inconsistency (either it cannot do the task or it makes mistakes doing it).
This doesn’t mean that AI will be totally useless. It will be just like planes being controlled by autopilot without getting rid of the human pilot. Or, it will be like Grammarly correcting English errors without getting rid of the human writer. Overall, it will move humans from doing tasks that are deterministic to those that are not deterministic. For the indeterministic tasks, humans can use AI but not exclusively rely on it.
There are negative effects of AI as well. One, there will be a lot of misinformation generated by AI, and people will have a harder time telling the difference between fact and fiction. Two, the number of people doing low-end jobs will fall as some of their work is automated by AI. Three, the economy will undergo a downturn as unemployment rates rise. Four, there will be societal discontent as unemployed people try to survive through crime. Five, preventing this crime will require AI to compete against human criminal ingenuity, and the costs of AI will keep rising. Six, we will hit a point at which the costs of AI are greater than the benefits, which is when it will be limited.
We have to look at AI through the eyes of an economist. Media propaganda doesn't do that. Most scientists don't tell you about the problem of Gödel's incompleteness. They don't say that mathematics itself cannot do what humans do. The human mind cannot be reduced to a machine. But to the extent that humans have been operating machines (since the dawn of industrialization), AI can automate many of those things and make those people jobless.
What will those people do? The answer is: Whatever they were doing before industrialization! It means that people will slowly exit the industrial economy. A post-industrial parallel economy will appear with declining links between industrial and post-industrial economies. Those made redundant by the industrial economy, but unable to enter the parallel economy, will become criminals, and crime in society will rise, which will further increase the costs of industrialization and reduce its benefits. This is a gradual process and should not be expected to happen overnight. It took time for society to industrialize and it will take time to deindustrialize.
In the meantime, those who are part of the industrial economy should understand that AI is just another tool that will automate low-level automatable jobs but cannot replace humans completely. In the case of computer jobs, for example, AI can do code compilation, fixing syntax errors and API references, and optimal factoring of code into classes and functions. AI can generate test cases for testing the code. But none of this is complete. If we try to make it complete, then it will become inconsistent (i.e., cause lots of false positives and false negatives). Incompleteness means that there will be errors in code even after AI has done all that it could do. Only humans can catch such errors. Incompleteness means that there will be test cases for software that humans can think of but AI cannot generate. Therefore, humans in industrial society will not become totally redundant. But humans will be relieved of simple tasks. Surviving in this industrial society requires moving away from easy tasks to harder tasks. The definition of easy is—whatever the machine can do. The definition of hard is—whatever the machine cannot do. Many people will not be able to make the easy to hard transition due to lack of education and skills. They will live on economic fringes.
AI will be able to answer most of the interview questions being asked currently because they have well-known answers. You just search the internet and get the answer. AI can do that. But AI will not be able to answer customer- or user-specific questions because that information is either not public or cannot be inferred from available information. Surviving in this environment requires focus on unique customer problems, use cases, and envisioning the most optimal design that has considered the tradeoffs between performance, scale, functionality, maintainability, debuggability, and so on. Tradeoffs are not deterministic. Everyone decides to make different tradeoffs or the same tradeoff differently. AI cannot predict what is not deterministically decided.
Remember that a computer cannot deal with abstract concepts. A concept, in the case of a computer, is a set of individual objects, and that set (if complete) is infinite. Computers cannot think abstractly. Humans have the capacity for abstraction. Therefore, humans will be required to translate abstract concepts into individual cases, after which AI will expand them into code. Entry-level jobs will become experienced jobs, and experienced jobs will become pure thinking jobs. This will flatten organizational hierarchies as the lower-level jobs are automated by AI.
Finally, understand that the seeds of self-destruction are built into every false, wrong, and bad thing. The self-destructive seed grows over time and swallows the ground in which it was growing. False, wrong, and bad are therefore self-negating. Most people linearly extend the past into the future and cannot see this self-negating future.
AI might be such a tipping point that brings the reversal of industrialization, if a lot of money is poured into AI to automate jobs and make people jobless. It is too early to say, because AI hasn't yet been put to actual use in a big way. But if it is put to use in a big way, then most people will be kicked out of the industrial economy, which will cause an industrial economic crash, which will cause society to deindustrialize, reversing the trajectory of industrialization over the last 200-300 years.
As far as survival is concerned, people were surviving on meat and potatoes earlier. There are tribes in the Andaman Islands even today untouched by modern civilization. Nature provides us with our needs but not always with what we want. Humans have a big problem with living on needs. They go on expanding their wants, and in that process they trouble other people, not realizing that we are living in a closed system such that what goes around comes around. The creators of AI will eventually become victims of AI because instead of benefitting people they start harming people.
The economy grows collectively through money circulation. But under capitalism, there is a delusion that some people will grow at the expense of others. When wealth is stolen, then the stolen wealth is spent in fighting those from whom it was stolen. All the wealth is slowly destroyed in this process, and no new wealth is created. New wealth creation begins when people again come together to grow cooperatively. Those who stole the wealth are excluded from this cooperative group and die painfully.
Therefore, if AI is put to serious use, people will be kicked out of the industrial society. The AI producer will become redundant if the people he is trying to make redundant leave the society he controls. He needs a working economy to consume his creations. If consumers disappear from the economy then the whole endeavor self-destructs.
A parasite feeds on the host. The parasite appears to be in control but actually the parasite depends on the host. If the host dies, then the parasite dies. The parasite doesn’t realize that its existence depends on the host. It keeps destroying the host and eventually destroys itself. The escape from this scenario is that the host comes out of the parasite’s control and leaves the parasite to feed on itself to self-destruct.
Therefore, know what you are doing and try to do the least amount of harm while trying to survive. Then the future is secure. Nature will ensure that under the tit-for-tat system you are protected while those who do harm to others are destroyed.
November 19, 2023 at 11:42 am #15986
This is a great and satisfying answer.
Most people on the internet who speak about AI are very confused about its future because they have no knowledge of economics or of the dogmatic foundations of science and mathematics. They don't know that mathematics inherently lacks 'meaning' and cannot be both consistent and complete. And the weird thing is that they don't even know how AI actually works. For example, in a YouTube video, Emad Mostaque, CEO of Stability AI, says AI is miraculously compressing a very large amount of data, and he mentions it is some kind of intelligence but doesn't know what it is, as he doesn't know Sankhya philosophy, which says that every detailed concept has expanded from an abstract concept.
I want to highlight some points from your above answer:
- AI will be used as a tool by experts but it will not replace them as AI is prone to making mistakes.
- AI will replace low-level/low-skilled jobs and thus flatten the hierarchy in a company.
- The growth of AI is self-destructive because more people will leave the industrial system due to lack of jobs, which eventually leads to the death of the system and AI.
November 20, 2023 at 7:42 am #15989
Data can be compressed into a formula. For example, you can stand at a train station and measure the arrival and departure times of trains. Using this data, you can come up with a formula for train arrival and departure. If the trains work deterministically, then the formula is one line long while the train arrival and departure times take up reams of paper. So, the formula compresses data if there is regularity in data.
But what happens if the trains start to arrive and depart a little ahead of schedule or a little behind schedule? Slowly, the size of your formula will start growing. If the train arrival and departure are totally random, then the formula size will equal the size of the data. Essentially, now, you have lost the ability to compress the reams of data into a short formula. Instead, the size of the formula equals the size of the data.
This is a general computational principle that ordered data is highly compressible while disordered data has very low compressibility. If data is totally random, then compressibility is zero. Then the size of the formula equals the size of the data. Basically, there is no incentive in trying to find a formula to compress random data.
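This principle is easy to demonstrate with an ordinary compression library. A minimal sketch in Python, using `zlib` as a stand-in for any compressor, with a perfectly regular "train schedule" versus random bytes of the same length:

```python
import os
import zlib

# A highly regular "schedule": the same 14-byte entry repeated 1000 times.
ordered = b"08:00 depart; " * 1000          # 14,000 bytes of pure order

# Random bytes of the same length stand in for unpredictable arrivals.
random_data = os.urandom(len(ordered))

# Ordered data collapses to a tiny "formula"; random data barely shrinks
# at all (the compressed output is about the size of the input).
print(len(zlib.compress(ordered)))          # a few dozen bytes
print(len(zlib.compress(random_data)))      # roughly 14,000 bytes
```

The regular input compresses by orders of magnitude, while the random input compresses essentially not at all, which is the "formula size equals data size" situation described above.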
AI lies in between total randomness and deterministic order. We can use classical deterministic models for ordered data, and we cannot use any model for random data. But if there is some repeatability in the data, then we can compress it using an AI model. The problem is that nobody knows how much order exists and how much we can compress. Therefore, they cannot determine the size of the AI model in advance. As more and more data is incorporated, the formula grows in size. This size growth in AI appears as the expanding number of neural network nodes. Present-day neural networks are expanding to billions of nodes. Now, a billion (10^9) is still a small number compared to the amount of data being crunched, which is in exabytes (10^18). So you can say that AI is compressing 10^18 bytes into 10^9 bytes. That is huge.
But all this compression comes with simplification, because reality is infinitely unique, and when we try to compress it, we have to discard the uniqueness and accept the general case. This general case is true on average but not true for each individual case. E.g., we might say that on average parents have 1.9 children. But in each individual case, there is either 1 child or 2 children. Most parents have two children, but some parents have one child. Since we don't know which is the case for each individual parent, to be accurate we have to check each parent individually; the two-children guess is merely closer to the average. So AI-driven compression works for average cases, and it doesn't work for individual cases.
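The gap between the average and the individual case can be sketched in a few lines of Python. The population below is hypothetical, chosen only to reproduce the 1.9-children average from the example above:

```python
# Hypothetical population: 90 families with 2 children, 10 with 1 child.
families = [2] * 90 + [1] * 10

# The compressed "model" is just the average.
average = sum(families) / len(families)
print(average)   # 1.9 children per family on average

# But predicting the average for every individual family is wrong every
# time: no family actually has 1.9 children.
misses = sum(1 for kids in families if kids != average)
print(misses)    # 100: the average matches no individual case
```

The statistic is a faithful summary of the whole population, yet it is false for every individual member, which is why the outlier-sensitive cases discussed below still need per-case checking.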
Now suppose that some company wants to sell children's shoes. They can go to AI and ask: What is the total number of children in a target age group who could buy my brand of children's shoes? AI will give a pretty good answer to that question. As a result, the person doing market research on children's shoes will lose his job, because AI gives as good an answer as the market researcher would have given. This is because we are asking for statistical information and not precise information. But if we ask whether a specific parent will buy children's shoes, then the answer will often be false.
So it is correct to say that AI compresses data to produce statistical averages. It is also correct to say that those averages are useful when the average case is required. It is also correct to say that if the outlier case is catastrophic, then I cannot rely on the average. I have to check each case, and then the benefits of AI become very low.
A simple example is people looking at MRI or X-ray reports. In the average case, AI can say what the MRI report means in terms of fractures. However, the doctor has to still look at the report because there could be an outlier in the specific patient’s case which the doctor cannot ignore. AI can help the doctor but cannot get rid of the doctor.
The hype about AI does not distinguish between the case where the answer is deterministic and the case where it is probabilistic. As the input data gets more varied, the AI model has to expand from billions to trillions of nodes. The cost of AI increases proportionately, until it becomes cost-prohibitive to train an AI rather than train a human to do the same job. This limitation of AI is not being discussed at present due to the hype.
The virtue of the semantic reality is that it can compress even seemingly random data. For example, the letters in a book appear in a seemingly random order. However, the author of the book compresses paragraphs into a section title, section titles into a chapter title, chapter titles into a book title, and many books into a book category. The whole universe can be compressed into a single word with a semantic reality.
Thus, AI compresses repeatable information but not random information. However, semantic intelligence easily compresses meaningful data that seems random to a computer. Since the whole universe is meaningful, therefore, everything is infinitely compressible. The universe springs from a single “seed” and Krishna is the seed-giving father. The seed expands into a massive tree and then collapses back into the seed because it is using semantics to compress and expand infinitely. If we understand this, then the difference between real and artificial intelligence will become clear.
November 20, 2023 at 9:25 am #15990

November 26, 2023 at 4:17 am #16008

November 26, 2023 at 4:28 am #16016
Samu Mini, don't be so optimistic. Life will go on. The universe will live for trillions of more years. When a society dies, a new one is born on top of it. Nature regenerates life just as Nature generated life at the beginning of the universe. If all life dies on earth, there is life on other planets, which will immigrate to the earth to generate life again. The soul is eternal. If it can't be born on this planet, it will be born on other planets.
Apocalyptic thinking is unique to Abrahamic religions. Under it, they crave for an end to the world so that they can go to “judgment day” and quickly to heaven. Apocalyptic thinking is alien to the Vedic system. There is no judgment day. Nobody goes to heaven after an apocalypse. At most, they are reborn based on their previous deeds.
Personally, for me, both the fear of and the excitement about AI are creations of Hollywood movies, blown far out of proportion, and grossly unrealistic. They arise because people don't know science and technology; they get their "knowledge" from media propaganda, blow it out of proportion based on their ignorance, and then shrug off all responsibility for their present life because they imagine that something drastic is going to happen.
My advice to you would be to take responsibility for your actions and try to elevate yourself. Try to understand the science and technology. Don't be so lazy as to shrug it off under a presumed apocalypse, because when it doesn't come per your expectation, you will be lagging behind everyone else who has been working for their uplift. You will then hope for yet another apocalypse, because life will have become unbearable to you.