Large Language Models vs. Semantic-Based Systems


    Thomas A. Anderson

    Dear Ashish Prabhu, Hare Krishna

    Thank you so much for your single-handed effort to teach everyone Vedic philosophy and science. It is amazing what you teach us, and how deep and complex it is. One may need a whole life to get through all the wonderful and deep topics you are studying and teaching. So thank you for all your books, videos, lectures, journals, blogs, and also this questions-and-answers forum!

    May I humbly ask for your opinion and perspective on the recent developments in Artificial Intelligence – Large Language Model systems – versus the semantic-based science and models you are teaching us, based on the Vedas.

    Please be so kind as to give us some orientation on these issues, if you wish.

    Thank you so much for your great work.

    It is hard to express in words how thankful I am as I try to study, understand, and apply your complex and deep work.

    You are doing an immense service to Lord Krishna and Lord Caitanya, and to all devotees and non-devotees.

    Thank you very much for that.

    Hare Krishna

    Your servant

    Thomas Anderson

    Vienna, Austria

    Ashish Dalela

    Thank you for your appreciation. Sometimes even I need encouragement.

    On AI, this is a good post to begin with:

    Why AI is Deficient and Yet it Seems Very Powerful

    All AI models are probabilistic, not deterministic. However, if society is standardized to a specific way of living, thinking, talking, working, and enjoying, then that probabilistic model becomes very accurate. For AI to be successful, society has to be standardized.

    Take for example the AI model of car drivers. The model will work very well for Western countries where most people drive in a standard way most of the time. But if you come to India, then in many places, people don’t follow any rules. On the same road, there are cars, trucks, buses, bullock carts, and even cows or dogs. Real intelligence can handle the unexpected. Artificial Intelligence handles the expected. So, an AI car driver model will work well in Western countries but not in India.

    The same goes for the language models. If most people use language in the same way, with the same phrases, idioms, expected forms of greeting each other, standard questions, and standard responses, then AI can build a very good language model. But if the language usage varies a lot, then the AI model is effectively useless.
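    This dependence on uniform usage can be sketched with a toy next-word predictor. The corpora and phrases below are illustrative assumptions, not real training data:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count next-word frequencies for each word (a toy probabilistic model)."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            model[a][b] += 1
    return model

def predict(model, word):
    """Return the most frequently observed next word, or None if unseen."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

# Standardized usage: everyone greets the same way, so prediction is easy.
standard = ["good morning how are you"] * 10
# Varied usage: each speaker phrases things differently.
varied = ["good morning how are you", "good grief what a day",
          "good riddance to that", "good luck with everything"]

print(predict(train_bigram(standard), "good"))  # always 'morning'
print(predict(train_bigram(varied), "good"))    # a toss-up among four options
```

    With standardized usage, the single observed continuation is always right; with varied usage, the model’s best guess is no better than a guess among equally frequent options.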

    I use an AI tool called Grammarly to check my English because I am not a native English speaker. Since I often write about the material world, “matter” is a noun for me. But for Grammarly, “matter” is incorrect. Grammarly always tries to correct “matter” to “the matter” (as in the common usage “the fact of the matter is that”).

    This is because Grammarly has been trained for ordinary people doing ordinary things where the word “matter” means “fact of the matter” rather than “material reality”.

    Grammarly thinks that I am mistaken when I am factually correct according to technical usage. However, Grammarly is not trained in technical usage. Now if we train it on technical usage, then it will also be confused because “matter” and “the matter” are both correct usages so it won’t know if “matter” is missing “the” or “the matter” has incorrectly added “the”. To work correctly, the new AI model will have to take into account the presence of other words such as momentum, particle, wave, charge, mass, name of scientists, etc. If all this technical terminology is present, then “matter” is correct. But if all this technical terminology is absent, then “the matter” is correct.
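    The context-sensitive rule described above can be sketched as follows. This is a hypothetical illustration, not Grammarly’s actual algorithm, and the list of technical terms is an assumed stand-in:

```python
# Assumed marker vocabulary for "scientific English" (illustrative only).
TECHNICAL_TERMS = {"momentum", "particle", "wave", "charge", "mass"}

def suggest(sentence: str) -> str:
    """Suggest 'matter' in technical contexts, 'the matter' otherwise."""
    words = set(sentence.lower().replace(",", "").replace(".", "").split())
    if words & TECHNICAL_TERMS:
        return "matter"        # scientific English: 'matter' as material reality
    return "the matter"        # ordinary English: 'the fact of the matter'

print(suggest("The wave and the particle are both matter"))  # matter
print(suggest("The fact of the matter is that he left"))     # the matter
```

    The same surface word gets two different corrections depending on which vocabulary surrounds it, which is exactly why a single model trained on ordinary usage misjudges technical writing.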

    Let’s call the two AI models “scientific English” and “ordinary English”. The “ordinary English” AI model has a large user base and requires less effort. But the “scientific English” AI model has a small user base and requires a lot of effort. This problem repeats across hundreds of domains. For example, “medical English” is different from “ordinary English”. Similarly, “software developer’s English” is different from “ordinary English”. Therefore, Grammarly’s model will not work for any specialized domain. The cost of building such a model is huge and the benefit of building it is little.

    This is AI’s Achilles’ heel. You can produce a good model for anything, provided you train it. The costs of training for a new domain are roughly constant, but the benefits of training are low for a narrow domain. When the cost-benefit analysis is against it, AI will not be used for that domain until it becomes a common or popular domain.

    Thus, we can summarize the problem of AI models as follows:

    • We can create an AI model for any standardized domain
    • The costs of modeling for any domain are roughly the same
    • The benefits of modeling for large domains are large
    • The benefits of modeling for small domains are small
    • AI succeeds if society can be standardized to one model
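    The cost-benefit argument above can be put as simple arithmetic. All numbers below are illustrative assumptions, not real figures:

```python
def net_benefit(users, benefit_per_user, training_cost):
    """Training cost is roughly fixed per domain; benefit scales with users."""
    return users * benefit_per_user - training_cost

TRAINING_COST = 1_000_000  # assumed fixed cost of training any domain model

# A large, standardized domain easily repays the fixed cost.
print(net_benefit(10_000_000, 1, TRAINING_COST))  # 9000000
# A narrow, specialized domain does not.
print(net_benefit(50_000, 1, TRAINING_COST))      # -950000
```

    Because the cost is fixed but the benefit scales with the user base, only large (that is, standardized) domains are worth modeling.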

    Standardization means homogenization of culture, society, language, profession, shopping, entertainment, education, and so on. This is achievable if the whole world takes to one culture, one set of laws, standards of living, norms of behavior, etc. This is the long-term goal of neoliberalism and globalization, namely to push one standard onto everyone. If that can be done, then AI will control society, because society has been homogenized and standardized, so probabilistic AI will predict it very accurately. It is just like using common English for all domains rather than a unique English in each domain.

    We have to understand that all industrial technology works on economies of scale. Thousands of people should eat the same biscuit, wear the same t-shirt, drive the same car, watch the same movie, etc. Western culture tries to homogenize society. When Western culture takes over, economies of scale start working and people are standardized. Therefore, for AI to be successful, the recipe is simple: Push the Western values to the rest of the world. Remake the world in the image of the West. Then AI developed in the West for the Western population will work all over the globe.

    AI has serious flaws, but the answer to those flaws is the standardization and homogenization of society into one standard. The people who want AI to control the world are also the people pushing the Western globalist narrative all over the world. When every human is like every other human, they will be as predictable as a standard appliance. So, humans have to be trained to become standardized widgets, like nuts and bolts.

    In contrast, the Vedic system says that no two things are alike. No two apples are the same. No two oranges are the same. No two places are the same. No two times are the same. No two persons are the same. Reality is infinite uniqueness. Even in the spiritual realm, there is infinite uniqueness. The Vedic system celebrates this uniqueness. But modernity wants to destroy uniqueness and create standard widgets. Everything is a particle. Or everything is a wave. There are only two properties called position and momentum (or something like that). Science replaces uniqueness with uniformity.

    Everyone can play a role in undoing AI. Don’t be predictable. Don’t be like the others. Don’t conform to the materialistic norms of society. Then AI cannot model you. Be who you are, not what someone else wants you to be or is forcing you to become. Just by being unique, you can defeat AI. That is AI’s Achilles’ heel. Know it, use it.

    Thomas A. Anderson

    Thank you very much, Ashish Prabhu, for sharing your perspective, insights, and overview on the subjects of this discourse. I hope I understand your points well and can see the validity of your arguments.

    Unity in sameness – the Western model.

    Unity in diversity – the Vedic model.

    The reason I asked for your perspective and guidance on AI and LLMs versus Vedic models was curiosity about how useful or beneficial it would be to have your Vedic models and teachings incorporated into a kind of Vedic-semantic LLM, where the model is trained on your teachings. Someone interested in learning and understanding more could then use a ChatGPT-style interface to talk to and learn from the model you helped create – like having a ChatGPT-style model fine-tuned on the corpus of knowledge you provide, so that it can give an accurate understanding, guidance, and representation of the Vedic worldview you are trying to help people learn about and understand.

    I personally would find it interesting to be able to “talk” to your Vedic knowledge model and learn from and interact with it, as I feel guilty taking your time and work – you are already overburdened with so much writing and video-making – to answer all the many questions I may have in my effort to understand the core and the many details of your Vedic teachings.

    If there is a way to support you in developing such a Vedic knowledge-and-values LLM one day, I would be happy to provide any support. Of course, I understand that such training and fine-tuning of Vedic knowledge on an existing LLM may be a technical and financial effort, but nevertheless maybe there are some workarounds which could make this possible, for the benefit of everyone who cares to learn more about the Vedic worldview, or even for people who have never heard of it.

    All glories to your great and important work of making the Vedas accessible and understandable to people from the West and even to people from India.

    thank you very much

    Hare Krishna


    Ashish Dalela

    The process of spiritual life is removing bad ideas from our mind. These bad ideas give rise to questions. If that bad idea is not removed, then questions will keep coming. The ideas and questions underdetermine each other. The same bad idea can produce many questions. The same question can be produced by many bad ideas. One has to see the bad idea behind the question, and remove it to stop the questions. Technology does not see the idea behind the question. Therefore, even if it answers some questions, it does not put an end to the stream of questions arising from the bad idea. Its answers can also sometimes worsen the bad idea.

    The situation is like trying to diagnose an illness from its symptoms. A good doctor can see what lies behind the symptoms. A bad doctor treats the symptoms. Technology is like a bad doctor that can control the symptoms while the underlying illness remains unchanged. As the root cause is unchanged, the illness keeps coming back. This means that no matter how many times you answer the question, until the bad idea at the root is destroyed, the answer is not understood and the same question comes back again and again, just as a fever returns if the infection is not cured. Therefore, the main focus has to be on curing the illness to stop the fever. Stopping the fever is not ending the illness. But technology cannot know the illness. It can only try to control the symptoms, based on the symptoms it can perceive.

    The bad ideas are often unique to each person. Hence, the answer to each question has to be unique so as to remove the bad idea while answering the question. It needs the ability to see the idea that causes the question, not the question.

    In the Vedic system, reading is prescribed not to answer your questions but to remove bad ideas. We answer questions when a question makes a person stop reading. Otherwise, the prescription is that a person should go on reading even if there are questions. Over time, those questions will disappear automatically. But since the mind gets preoccupied with questions and is not able to move forward, we answer the question, because without that, further reading won’t occur.

    It is like giving paracetamol to control the fever while the cure happens due to the antibiotic. We also try to remove the cause of the question while trying to answer the question, which is why the answer is unique to each person asking the same question. It is to control the symptom while trying to cure the disease.

    Your question assumes: (a) knowledge is a database, (b) we need methods to search this database, and (c) machines are helping us search databases. But knowledge is a person. Everything we say about reality is also personhood. If you gain knowledge of personhood impersonally, then the content is contradictory to the form by which you are receiving it. The content will say that reality is a person but the form will be that I am talking to a robot. Hence, don’t try to acquire knowledge impersonally.

    In India, books were treated with the same respect as an author. While reading a book, we think about the author. That reverence is the personalism. If we increase impersonalism through technology, then impersonalism will not die and the same questions will keep coming again and again in a million ways. That is because we are drinking milk from a poisoned chalice. The result is also poisonous milk.

    Thomas A. Anderson

    Thank you very much for your perspective and explanations on LLMs vs. self-study of the Vedas.

    Yes, you are right in your arguments.

    Your point is well taken!

    Thank you !

    Hare Krsna

