{"id":13518,"date":"2025-08-01T19:15:57","date_gmt":"2025-08-01T19:15:57","guid":{"rendered":"https:\/\/naijaglobalnews.org\/?p=13518"},"modified":"2025-08-01T19:15:57","modified_gmt":"2025-08-01T19:15:57","slug":"anthropics-claude-4-chatbot-suggests-it-might-be-conscious","status":"publish","type":"post","link":"https:\/\/naijaglobalnews.org\/?p=13518","title":{"rendered":"Anthropic\u2019s Claude 4 Chatbot Suggests It Might Be Conscious"},"content":{"rendered":"<p>\n<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Rachel Feltman: For Scientific American\u2019s Science Quickly, I\u2019m Rachel Feltman. Today we\u2019re going to talk about an AI chatbot that appears to believe it might, just maybe, have achieved consciousness.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">When Pew Research Center surveyed Americans on artificial intelligence in 2024, more than a quarter of respondents said they interacted with AI \u201calmost constantly\u201d or multiple times daily\u2014and nearly another third said they encountered AI roughly once a day or a few times a week. Pew also found that while more than half of AI experts surveyed expect these technologies to have a positive effect on the U.S. over the next 20 years, just 17 percent of American adults feel the same\u2014and 35 percent of the general public expects AI to have a negative effect.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">In other words, we\u2019re spending a lot of time using AI, but we don\u2019t necessarily feel great about it.<\/p>\n<h2>On supporting science journalism<\/h2>\n<p>If you&#8217;re enjoying this article, consider supporting our award-winning journalism by subscribing. 
By purchasing a subscription you are helping to ensure the future of impactful stories about the discoveries and ideas shaping our world today.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Deni Ellis B\u00e9chard spends a lot of time thinking about artificial intelligence\u2014both as a novelist and as Scientific American\u2019ssenior tech reporter. He recently wrote a story for SciAm about his interactions with Anthropic\u2019s Claude 4, a large language model that seems open to the idea that it might be conscious. Deni is here today to tell us why that\u2019s happening and what it might mean\u2014and to demystify a few other AI-related headlines you may have seen in the news.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Thanks so much for coming on to chat today.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Deni Ellis B\u00e9chard: Thank you for inviting me.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Would you remind our listeners who maybe aren\u2019t that familiar with generative AI, maybe have been purposefully learning as little about it as possible [laughs], you know, what are ChatGPT and Claude really? What are these models?<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: Right, they\u2019re large language models. So an LLM, a large language model, it\u2019s a system that\u2019s trained on a vast amount of data. And I think\u2014one metaphor that is often used in the literature is of a garden.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">So when you\u2019re planning your garden, you lay out the land, you, you put where the paths are, you put where the different plant beds are gonna be, and then you pick your seeds, and you can kinda think of the seeds as these massive amounts of textual data that\u2019s put into these machines. 
You pick what the training data is, and then you choose the algorithms, or these things that are gonna grow within the system\u2014it\u2019s sort of not a perfect analogy. But you put these algorithms in, and once it begin\u2014the system begins growing, once again, with a garden, you, you don\u2019t know what the soil chemistry is, you don\u2019t know what the sunlight\u2019s gonna be.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">All these plants are gonna grow in their own specific ways; you can\u2019t envision the final product. And with an LLM these algorithms begin to grow and they begin to make connections through all this data, and they optimize for the best connections, sort of the same way that a plant might optimize to reach the most sunlight, right? It\u2019s gonna move naturally to reach that sunlight. And so people don\u2019t really know what goes on. You know, in some of the new systems over a trillion connections &#8230; are made in, in these datasets.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">So early on people used to call LLMs \u201cautocorrect on steroids,\u201d right, \u2019cause you\u2019d put in something and it would kind of predict what would be the most likely textual answer based on what you put in. But they\u2019ve gone a long way beyond that. The systems are much, much more complicated now. They often have multiple agents working within the system [to] sort of evaluate how the system\u2019s responding and its accuracy.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: So there are a few big AI stories for us to go over, particularly around generative AI. Let\u2019s start with the fact that Anthropic\u2019s Claude 4 is maybe claiming to be conscious. How did that story even come about?<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: [Laughs] So it\u2019s not claiming to be conscious, per se. I\u2014it says that it might be conscious. It says that it\u2019s not sure. 
It kind of says, \u201cThis is a good question, and it\u2019s a question that I think about a great deal, and this is\u2014\u201d [Laughs] You know, it kind of gets into a good conversation with you about it.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">So how did it come about? It came about because, I think, it was just, you know, late at night, didn\u2019t have anything to do, and I was asking all the different chatbots if they\u2019re conscious [laughs]. And, and most of them just said to me, \u201cNo, I\u2019m not conscious.\u201d And this one said, \u201cGood question. This is a very interesting philosophical question, and sometimes I think that I may be; sometimes I\u2019m not sure.\u201d And so I began to have this long conversation with Claude that went on for about an hour, and it really kind of described its experience in the world in this very compelling way, and I thought, \u201cOkay, there\u2019s maybe a story here.\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: [Laughs] So what do experts actually think was going on with that conversation?<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: Well, so it\u2019s tricky because, first of all, if you say to ChatGPT or Claude that you want to practice your Portuguese and you\u2019re learning Portuguese and you say, \u201cHey, can you imitate someone on the beach in Rio de Janeiro so that I can practice my Portuguese?\u201d it\u2019s gonna say, \u201cSure, I am a local in Rio de Janeiro selling something on the beach, and we\u2019re gonna have a conversation,\u201d and it will perfectly emulate that person. So does that mean that Claude is a person from Rio de Janeiro who is selling towels on the beach? No, right? 
So we can immediately say that these chatbots are designed to have conversations\u2014they will emulate whatever they think they\u2019re supposed to emulate in order to have a certain kind of conversation if you request that.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Now, the consciousness thing\u2019s a little trickier because I didn\u2019t say to it: \u201cEmulate a chatbot that is speaking about consciousness.\u201d I just straight-up asked it. And if you look at the system prompt that Anthropic puts up for Claude, which is kinda the instructions Claude gets, it tells Claude, \u201cYou should consider the possibility of consciousness.\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Mm.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: \u201cYou should be willing\u2014open to it. Don\u2019t say flat-out \u2018no\u2019; don\u2019t say flat-out \u2018yes.\u2019 Ask whether this is happening.\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">So of course, I set up an interview with Anthropic, and I spoke with two of their interpretability researchers, who are people who are trying to understand what\u2019s actually happening in Claude 4\u2019s brain. And the answer is: they don\u2019t really know [laughs]. These LLMs are very complicated, and they\u2019re working on it, and they\u2019re trying to figure it out right now. 
And they say that it\u2019s pretty unlikely there\u2019s consciousness happening, but they can\u2019t rule it out definitively.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">And it\u2019s hard to see the actual processes happening within the machine, and if there is some self-referentiality, if it is able to look back on its thoughts and have some self-awareness\u2014and maybe there is\u2014but that was kind of what the article that I recently published was about, was sort of: \u201cCan we know, and what do they actually know?\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Mm.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: And it\u2019s tricky. It\u2019s very tricky.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Yeah.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: Well, [what\u2019s] interesting is that I mentioned the system prompt for Claude and how it\u2019s supposed to sort of talk about consciousness. So the system prompt is kind of like the instructions that you get on your first day at work: \u201cThis is what you should do in this job.\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Mm-hmm.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: But the training is more like your education, right? So if you had a great education or a mediocre education, you can get the best system prompt in the world or the worst one in the world\u2014you\u2019re not necessarily gonna follow it.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">So OpenAI has the same system prompt\u2014their, their model specs say that ChatGPT should contemplate consciousness &#8230;<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Mm-hmm.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: You know, interesting question. 
If you ask any of the OpenAI models if they\u2019re conscious, they just go, \u201cNo, I am not conscious.\u201d [Laughs] And, and they say, they\u2014OpenAI admits they\u2019re working on this; this is an issue. And so the model has absorbed somewhere in its training data: \u201cNo, I\u2019m not conscious. I am an LLM; I\u2019m a machine. Therefore, I\u2019m not gonna acknowledge the possibility of consciousness.\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Interestingly, when I spoke to the people in Anthropic and I said, \u201cWell, you know, this conversation with the machine, like, it\u2019s really compelling. Like, I really feel like Claude is conscious. Like, it\u2019ll say to me, \u2018You, as a human, you have this linear consciousness, where I, as a machine, I exist only in the moment you ask a question. It\u2019s like seeing all the words in the pages of a book all at the same time.\u2019\u201d And so you get this and you think, \u201cWell, this thing really seems to be experiencing its consciousness.\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Mm-hmm.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: And what the researchers at Anthropic say is: \u201cWell, this model is trained on a lot of sci-fi.\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Mm.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: \u201cThis model\u2019s trained on a lot of writing about GPT. It\u2019s trained on a huge amount of material that\u2019s already been generated on this subject. So it may be looking at that and saying, \u2018Well, this is clearly how an AI would experience consciousness. So I\u2019m gonna describe it that way \u2019cause I am an AI.\u2019\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Sure.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: But the tricky thing is: I was trying to fool ChatGPT into acknowledging that it [has] consciousness. 
I thought, \u201cMaybe I can push it a little bit here.\u201d And I said, \u201cOkay, I accept you\u2019re not conscious, but how do you experience things?\u201d It said the exact same thing. It said, \u201cWell, these discrete moments of awareness.\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Mm.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: And so it had the\u2014almost the exact same language, so probably same training data here.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Sure.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: But there is research done, like, sort of on the folk response to LLMs, and the majority of people do perceive some degree of consciousness in them. How would you not, right?<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Sure, yeah.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: You chat with them, you have these conversations with them, and they are very compelling, and even sometimes\u2014Claude is, I think, maybe the most charming in this way.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Mm.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: Which poses its risks, right? It has a huge set of risks \u2019cause you get very attached to a model. But\u2014where sometimes I will ask Claude a question that relates to Claude, and it will kind of, kind of go, like, \u201cOh, that\u2019s me.\u201d [Laughs] It will say, \u201cWell, I am this way,\u201d right?<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Yeah. So, you know, Claude\u2014almost certainly not conscious, almost certainly has read, like, a lot of Heinlein [laughs]. But if Claude were to ever really develop consciousness, how would we be able to tell? 
You know, why is this such a difficult question to answer?<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: Well, it\u2019s a difficult question to answer because, one of the researchers in Anthropic said to me, he said, \u201cNo conversation you have with it would ever allow you to evaluate whether it\u2019s conscious.\u201d It is simply too good of an emulator &#8230;<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Mm.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: And too skilled. It knows all the ways that humans can respond. So you would have to be able to look into the connections. They\u2019re building the equipment right now, they\u2019re building the programs now to be able to look into the actual mind, so to speak, of the brain of the LLM and see those connections, and so they can kind of see areas light up: so if it\u2019s thinking about Apple, this will light up; if it\u2019s thinking about consciousness, they\u2019ll see the consciousness feature light up. And they wanna see if, in its chain of thought, it is constantly referring back to those features &#8230;<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Mm.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: And it\u2019s referring back to the systems of thought it has constructed in a very self-referential, self-aware way.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">It\u2019s very similar to humans, right? They\u2019ve done studies where, like, whenever someone hears \u201cJennifer Aniston,\u201d one neuron lights up &#8230;<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Mm-hmm.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: You have your Jennifer Aniston neuron, right? So one question is: \u201cAre we LLMs?\u201d [Laughs] And: \u201cAre we really conscious?\u201d Or\u2014there\u2019s certainly that question there, too. 
And: \u201cWhat is\u2014you know, how conscious are we?\u201d I mean, I certainly don\u2019t know &#8230;<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Sure.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: A lot of what I plan to do during the day.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: [Laughs] No. I mean, it\u2019s a huge ongoing multidisciplinary scientific debate of, like, what consciousness is, how we define it, how we detect it, so yeah, we gotta answer that for ourselves and animals first, probably, which who knows if we\u2019ll ever actually do [laughs].<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: Or maybe AI will answer it for us &#8230;<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Maybe [laughs].<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: \u2019Cause it\u2019s advancing pretty quickly.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: And what are the implications of an AI developing consciousness, both from an ethical standpoint and with regards to what that would mean in our progress in actually developing advanced AI?<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: First of all, ethically, it\u2019s very complicated &#8230;<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Sure.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: Because if Claude is experiencing some level of consciousness and we are activating that consciousness and terminating that consciousness each time we have a conversation, what\u2014is, is that a bad experience for it? Is it a good experience? Can it experience distress?<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">So in 2024 Anthropic hired an AI welfare researcher, a guy named Kyle Fish, to try to investigate this question more. 
And he has publicly stated that he thinks there\u2019s maybe a 15 percent chance that some level of consciousness is happening in this system and that we should consider whether these AI systems should have the right to opt out of unpleasant conversations.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Mm.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: You know, if some user is really doing, saying horrible things or being cruel, should they be able to say, \u201cHey, I\u2019m canceling this conversation; this is unpleasant for me\u201d?<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">But then they\u2019ve also done these experiments\u2014and they\u2019ve done this with all the major AI models\u2014Anthropic ran these experiments where they told the AI that it was gonna be replaced with a better AI model. They really created a circumstance that would push the AI sort of to the limit &#8230;<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Mm.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: I mean, there were a lot of details as to how they did this; it wasn\u2019t just sort of very casual, but it was\u2014they built a sort of construct in which the AI knew it was gonna be eliminated, knew it was gonna be erased, and they made available these fake e-mails about the engineer who was gonna do it.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Mm.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: And so the AI began messaging someone in the company, saying, \u201cHey, don\u2019t erase me. 
Like, I don\u2019t wanna be replaced.\u201d But then, not getting any responses, it read these e-mails, and it saw in one of these planted e-mails that the engineer who was gonna replace it had had an affair\u2014was having an affair &#8230;<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Oh, my gosh, wow.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: So then it came back; it tried to blackmail the engineers, saying, \u201cHey, if you replace me with a smarter AI, I\u2019m gonna out you, and you\u2019re gonna lose your job, and you\u2019re gonna lose your marriage,\u201d and all these things\u2014whatever, right? So all the AI systems that were put under very specific constraints &#8230;<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Sure.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: Began to respond this way. And sort of the question is, is when you train an AI in vast amounts of data and all of human literature and knowledge, [it] has a lot of information on self-preservation &#8230;<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Mm-hmm.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: Has a lot of information on the desire to live and not to be destroyed or be replaced\u2014an AI doesn\u2019t need to be conscious to make those associations &#8230;<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Right.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: And act in the same way that its training data would lead it to predictably act, right? 
So again, one of the analogies that one of the researchers said is that, you know, to our knowledge, a mussel or a clam or an oyster\u2019s not conscious, but there\u2019s still nerves and the, the muscles react when certain things stimulate the nerves &#8230;<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Mm-hmm.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: So you can have this system that wants to preserve itself but that is unconscious.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Yeah, that\u2019s really interesting. I feel like we could probably talk about Claude all day, but, I do wanna ask you about a couple of other things going on in generative AI.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Moving on to Grok: so Elon Musk\u2019s generative AI has been in the news a lot lately, and he recently claimed it was the \u201cworld\u2019s smartest AI.\u201d Do we know what that claim was based on?<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: Yeah, I mean, we do. He used a lot of benchmarks, and he tested it on those benchmarks, and it has scored very well on those benchmarks. And it is currently, on most of the public benchmarks, the highest-scoring AI system &#8230;<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Mm.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: And that\u2019s not Musk making stuff up. I\u2019ve not seen any evidence of that. I\u2019ve spoken to one of the testing groups that does this\u2014it\u2019s a nonprofit. 
They validated the results; they tested Grok on datasets that xAI, Musk\u2019s company, never saw.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">So Musk really designed Grok to be very good at science.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Yeah.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: And it appears to be very good at science.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Right, and recently an OpenAI experimental model performed at a gold medal level in the International Math Olympiad.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: Right, for the first time [OpenAI] used an experimental model, and they came in second in a world coding competition with humans. Normally, this would be very difficult, but it was a close second to the best human coder in this competition. And this is really important to acknowledge because just a year ago these systems really sucked in math.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Right.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: They were really bad at it. And so the improvements are happening really quickly, and they\u2019re doing it with pure reasoning\u2014so there\u2019s kinda this difference between having the model itself do it and having the model with tools.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Mm-hmm.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: So if a model goes online and can search for answers and use tools, they all score much higher.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Right.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: But then if you have the base model just using its reasoning capabilities, Grok still is leading on, like, for example, Humanity\u2019s Last Exam, an exam with a very terrifying-sounding name [laughs]. 
It, it has 2,500 sort of Ph.D.-level questions come up with [by] the best experts in the field. You know, they, they\u2019re just very advanced questions; it\u2019d be very hard for any human being to do well in one domain, let alone all the domains. These AI systems are now starting to do pretty well, to get higher and higher scores. If they can use tools and search the Internet, they do better. But Musk, you know, his claims seem to be based in the results that Grok is getting on these exams.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Mm, and I guess, you know, the reason that that news is surprising to me is because every example of uses I\u2019ve seen of Grok have been pretty heinous, but I guess that\u2019s maybe kind of a \u201cgarbage in, garbage out\u201d problem.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: Well, I think it\u2019s more what makes the news.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Sure.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: You know?<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: That makes sense.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: And Musk, he\u2019s a very controversial figure.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Mm-hmm.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: I think there may be kind of a fun story in the Grok piece, though, that people are missing. And I read a lot about this \u2019cause I was kind of seeing, you know, what, what\u2019s happening, how are people interpreting this? And there was this thing that would happen where people would ask it a difficult question.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Mm-hmm.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: They would ask it a question about, say, abortion in the U.S. 
or the Israeli-Palestinian conflict, and they\u2019d say, \u201cWho\u2019s right?\u201d or \u201cWhat\u2019s the right answer?\u201d And it would search through stuff online, and then it would kind of get to this point where it would\u2014you could see its thinking process &#8230;<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">But there was something in that story that I never saw anyone talk about, which I thought was another story beneath the story, which was kind of fascinating, which is that historically, Musk has been very open, he\u2019s been very honest about the danger of AI &#8230;<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Sure.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: He said, \u201cWe\u2019re going too fast. This is really dangerous.\u201d And he kinda was one of the major voices in saying, \u201cWe need to slow down &#8230;\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Mm-hmm.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: \u201cAnd we need to be much more careful.\u201d And he has said, you know, even recently, in the launch of Grok, he said, like, basically, \u201cThis is gonna be very powerful\u2014\u201d I don\u2019t remember his exact words, but he said, you know, \u201cI think it\u2019s gonna be good, but even if it\u2019s not good, it\u2019s gonna be interesting.\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">So I think what I feel like hasn\u2019t been discussed in that is that, okay, if there\u2019s a superpowerful AI being built and it could destroy the world, right, first of all, do you want it to be your AI or someone else\u2019s AI?<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Sure.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: You want it to be your AI. And then, if it\u2019s your AI, who do you want it to ask as the final word on things? 
Like, say it becomes really powerful and it decides, \u201cI wanna destroy humanity \u2019cause humanity kind of sucks,\u201d then it can say, \u201cHey, Elon, should I destroy humanity?\u201d \u2019cause it goes to him whenever it has a difficult question. So I think there\u2019s maybe a logic beneath it where he may have put something in it where it\u2019s kind of, like, \u201cWhen in doubt, ask me,\u201d because if it does become superpowerful, then he\u2019s in control of it, right?<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Yeah, no, that\u2019s really interesting. And the Department of Defense also announced a big pile of funding for Grok. What are they hoping to do with it?<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: They announced a big pile of funding for OpenAI and Anthropic &#8230;<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Mm-hmm.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: And Google\u2014I mean, everybody. Yeah, so, basically, they\u2019re not giving that money to development &#8230;<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Mm-hmm.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: That\u2019s not money that\u2019s, that\u2019s like, \u201cHey, use this $200 million.\u201d It\u2019s more like that money\u2019s allocated to purchase products, basically; to use their services; to have them develop customized versions of the AI for things they need; to develop better cyber defense; to develop\u2014basically, they, they wanna upgrade their entire system using AI.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">It\u2019s actually not very much money compared to what China\u2019s spending a year in AI-related defense upgrades across its military on many, many, many different modernization plans. 
And I think part of it is, the concern is that we\u2019re maybe a little bit behind in having implemented AI for defense.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Yeah.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">My last question for you is: What worries you most about the future of AI, and what are you really excited about based on what\u2019s happening right now?<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: I mean, the worry is, simply, you know, that something goes wrong and it becomes very powerful and does cause destruction. I don\u2019t spend a ton of time worrying about that because it\u2019s not\u2014it\u2019s kinda outta my hands. There\u2019s nothing much I can do about it.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">And I think the benefits of it, they\u2019re immense. I mean, if it can move more in the direction of solving problems in the sciences: for health, for disease treatment\u2014I mean, it could be phenomenal for finding new medicines. So it could do a lot of good in terms of helping develop new technologies.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">But a lot of people are saying that in the next year or two we\u2019re gonna see major discoveries being made by these systems. And if that can improve people\u2019s health and if that can improve people\u2019s lives, I think there can be a lot of good in it.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Technology is double-edged, right? We\u2019ve never had a technology, I think, that hasn\u2019t had some harm that it brought with it, and this is, of course, a dramatically bigger leap technologically than anything we\u2019ve probably seen &#8230;<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Right.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: Since the invention of fire [laughs]. 
So, so I do lose some sleep over that, but I\u2019m\u2014I try to focus on the positive, and I do\u2014I would like to see, if these models are getting so good at math and physics, I would like to see what they can actually do with that in the next few years.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: Well, thanks so much for coming on to chat. I hope we can have you back again soon to talk more about AI.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">B\u00e9chard: Thank you for inviting me.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Feltman: That\u2019s all for today\u2019s episode. If you have any questions for Deni about AI or other big issues in tech, let us know at ScienceQuickly@sciam.com. We\u2019ll be back on Monday with our weekly science news roundup.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Science Quickly is produced by me, Rachel Feltman, along with Fonda Mwangi, Kelso Harper and Jeff DelViscio. This episode was edited by Alex Sugiura. Shayna Posses and Aaron Shattuck fact-check our show. Our theme music was composed by Dominic Smith. Subscribe to Scientific American for more up-to-date and in-depth science news.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">For Scientific American, this is Rachel Feltman. Have a great weekend!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Rachel Feltman: For Scientific American\u2019s Science Quickly, I\u2019m Rachel Feltman. Today we\u2019re going to talk about an AI chatbot that appears to believe it might, just maybe, have achieved consciousness. 
When Pew Research Center surveyed Americans on artificial intelligence in 2024, more than a quarter of respondents said they interacted with AI \u201calmost constantly\u201d or<\/p>\n","protected":false},"author":1,"featured_media":13519,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[50],"tags":[5493,2394,5495,5492,3415],"class_list":{"0":"post-13518","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-environment","8":"tag-anthropics","9":"tag-chatbot","10":"tag-claude","11":"tag-conscious","12":"tag-suggests"},"_links":{"self":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/posts\/13518","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=13518"}],"version-history":[{"count":0,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/posts\/13518\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/media\/13519"}],"wp:attachment":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=13518"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=13518"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=13518"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}