Google Reveals Gemini, Its Much-Anticipated Large Language Model

Google’s Gemini is available to consumers in Bard and on the Pixel 8 Pro now, with an enterprise model coming Dec. 13. Get more details about the LLM.


Google has revealed Gemini, its long-rumored large language model and rival to GPT-4. Global users of Google Bard and the Pixel 8 Pro will be able to run Gemini starting now; an enterprise product, Gemini Pro, is coming on Dec. 13. Developers can sign up now for an early preview in Android AICore.


What is Gemini?

Gemini is a large language model that powers generative artificial intelligence applications; it can summarize text, create images and answer questions. Gemini was trained on Google’s Tensor Processing Units v4 and v5e.

Google’s Bard is a generative AI based on the PaLM large language model. Starting today, Gemini will be used to give Bard “more advanced reasoning, planning, understanding and more,” according to a Google press release.

SEE: Microsoft invested $3.2 billion in AI in the UK. (TechRepublic)

Gemini size options

Gemini comes in three model sizes: Ultra, Pro and Nano. Ultra is the most capable, Nano is the smallest and most efficient, and Pro sits in the middle for general tasks. The Nano version is what Google is using on the Pixel, while Bard gets Pro. Google says it plans to run “extensive trust and safety checks” before releasing Gemini Ultra to select groups.

Gemini for coding

Gemini can code in Python, Java, C++, Go and other popular programming languages. Google also used Gemini to upgrade AlphaCode, its AI-powered code generation system.
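For developers, a request for generated code will presumably run through the Gemini API that opens Dec. 13. The sketch below assumes Google’s google-generativeai Python SDK and a “gemini-pro” model name based on the launch announcement; the exact interface may differ.

```python
# Hedged sketch: asking Gemini Pro to generate code through the
# google-generativeai SDK. The model name and method names are
# assumptions based on the launch announcement, not confirmed usage.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key from Google AI Studio

model = genai.GenerativeModel("gemini-pro")
response = model.generate_content(
    "Write a Python function that parses an ISO 8601 date string "
    "and returns a datetime object, with basic error handling."
)
print(response.text)  # the generated code, returned as text
```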

Gemini will be added to more Google products

Next, Google plans to bring Gemini to Ads, Chrome and Duet AI. In the future, Gemini will be used in Google Search as well.

Competitors to Gemini

Gemini and the products built with it, such as chatbots, will compete with OpenAI’s GPT-4, Microsoft’s Copilot (which is based on OpenAI’s GPT-4), Anthropic’s Claude AI, Meta’s Llama 2 and more. Google claims Gemini Ultra outperforms GPT-4 on several benchmarks, including the Massive Multitask Language Understanding (MMLU) general knowledge test and Python code generation.

Does Gemini have an enterprise product?

Starting Dec. 13, enterprise customers and developers will be able to access Gemini Pro through the Gemini API in Google’s Vertex AI or Google AI Studio.
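As a rough sketch of what that enterprise access might look like, the snippet below assumes the Vertex AI Python SDK’s preview module described in Google’s launch materials; the module path, model name, project and region values are placeholders and may not match the final interface.

```python
# Hedged sketch: reaching Gemini Pro through Vertex AI rather than the
# consumer SDK. Module path and model name are assumptions based on the
# announcement; project and region are placeholders.
import vertexai
from vertexai.preview.generative_models import GenerativeModel

vertexai.init(project="my-gcp-project", location="us-central1")

model = GenerativeModel("gemini-pro")
response = model.generate_content(
    "Summarize the key risks in this quarter's incident reports."
)
print(response.text)
```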

Google expects Gemini Nano to be generally available for developers and enterprise customers in early 2024. Android developers can use this LLM to build Gemini-powered apps on-device through Android AICore.

Possible enterprise use cases for Gemini


Of particular interest to enterprise use cases might be Gemini’s ability to “understand and reason about users’ intent,” said Palash Nandy, engineering director at Google, in a demonstration video. Gemini generates a bespoke UI depending on whether the user is looking for images or text. In the same UI, Gemini will flag areas in which it doesn’t have enough information and ask for clarification. Through the bespoke UI, the user can explore other options with increasing detail.

Gemini has been trained on multimodal content from the very beginning, instead of starting with text and expanding to audio, images and video later, letting Gemini parse written or visual information with equal acuity. One example Google provides of how this might be useful for business is the prompt “Could Gemini help make a demo based on this video?”, in which the AI translates video content into an original animation.
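To make the multimodal claim concrete, a single request can mix an image with a text instruction. This sketch again assumes the google-generativeai SDK, plus a “gemini-pro-vision” model name and a placeholder file; it illustrates the pattern rather than documenting the final API.

```python
# Hedged sketch: a multimodal prompt mixing an image and text in one
# request. Model name and SDK interface are assumptions; the image file
# is a placeholder standing in for a frame of the demo video.
import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")

frame = PIL.Image.open("video_frame.png")  # hypothetical still image
model = genai.GenerativeModel("gemini-pro-vision")
response = model.generate_content(
    ["Describe what happens in this frame and suggest a storyboard "
     "panel for an animated demo based on it.", frame]
)
print(response.text)
```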

Gemini’s timing compared to other popular LLMs

Gemini has been hotly rumored, as Google tries to compete with OpenAI. The New York Times reported Google executives were “shaken” by OpenAI’s tech in January 2023. More recently, Google supposedly struggled with releasing Gemini in languages other than English, leading to a delay of an in-person launch event.

However, releasing Google’s own large language model after ChatGPT has received gradual GPT-4-powered updates for nearly a year means Google has the advantage of being able to leapfrog the past year of AI development. For example, Gemini is multimodal (i.e., able to work with text, video, speech and code) and lives natively on the Google Pixel 8. Users can access Gemini on their Google Pixel 8 without an internet connection, unlike ChatGPT, which started out in a browser.

Read the full article at: www.techrepublic.com

Is AI Mimicking Consciousness or Truly Becoming Aware Gradually?

 

AI’s remarkable abilities, like those seen in ChatGPT, often seem conscious due to their human-like interactions.

 

The question is whether the language model also perceives our text when we prompt it. Or is it just a zombie, working on the basis of clever pattern-matching algorithms? Based on the text it generates, it is easy to be persuaded that the system might be conscious. However, in this new research, Jaan Aru, Matthew Larkum and Mac Shine take a neuroscientific angle to answer this question.

 

All three being neuroscientists, these authors argue that although the responses of systems like ChatGPT seem conscious, they are most likely not. First, the inputs to language models lack the embodied, embedded information content characteristic of our sensory contact with the world around us. Secondly, the architectures of present-day AI algorithms are missing key features of the thalamocortical system that have been linked to conscious awareness in mammals. Finally, the evolutionary and developmental trajectories that led to the emergence of living conscious organisms arguably have no parallels in artificial systems as envisioned today.

 

The existence of living organisms depends on their actions and their survival is intricately linked to multi-level cellular, inter-cellular, and organismal processes culminating in agency and consciousness. Thus, while it is tempting to assume that ChatGPT and similar systems might be conscious, this would severely underestimate the complexity of the neural mechanisms that generate consciousness in our brains.

 

Researchers do not have a consensus on how consciousness arises in our brains. What we know, and what this new paper points out, is that the mechanisms are likely far more complex than those underlying current language models. For instance, as pointed out in this work, real neurons are not akin to the neurons in artificial neural networks. Biological neurons are real physical entities, which can grow and change shape, whereas neurons in large language models are just meaningless pieces of code. We still have a long way to go in understanding consciousness and, hence, a long way to go before we build conscious machines.

Read the full article at: neurosciencenews.com

Tesla Competition: China Planning to Roll Out Humanoid Robots by 2025

The Chinese government will accelerate the widespread production of advanced humanoid robots by funding more startups in the robotics field.

 


China is hoping to welcome robotkind in just two years’ time. The country plans to produce its first humanoid robots by 2025, according to an ambitious blueprint published by the Ministry of Industry and Information Technology (MIIT) last week. The MIIT says the advanced bipedal droids have the power to reshape the world, carrying out menial, repetitive tasks in farms, factories, and houses to alleviate our workload.

 

“They are expected to become disruptive products after computers, smartphones, and new energy vehicles,” the document states. The government will accelerate the development of the robots by funding more young companies in the field, as reported by Bloomberg. Fourier Intelligence is one such Chinese startup hoping to start mass-producing general-purpose humanoid robots by the end of this year. The Fourier GR-1 measures five feet and four inches and weighs around 121 pounds. With 40 joints, the bot reportedly has “unparalleled agility” and human-like movement. It can also walk at roughly 3 mph and complete basic tasks.

 

China isn’t the only country working on our future robot helpers, of course. In the U.S., Tesla is continuing to refine Optimus. The bipedal humanoid robot has progressed rapidly since the first shaky prototype was revealed at the marque’s AI day in 2022. It can now do yoga, in fact. Tesla has yet to announce a firm timetable for when Optimus will hit the market, but CEO Elon Musk has previously said that the $20,000 robot could be ready in three to five years.

 

Agility Robotics is another U.S. company with a mission of “building robots for good.” It opened a robot manufacturing facility in Oregon earlier this year that can produce more than 10,000 Digit droids per year. It also recently announced that Amazon will begin testing Digit for use in its operations.

 

Meanwhile, Boston Dynamics—makers of Spot, the $75,000 robotic dog—has built another decidedly agile bipedal robot. Atlas showed it could move various obstacles earlier this year, after nailing a parkour course in 2021. Boston Dynamics’ Atlas is a research platform and not available for purchase, but the robot does show the U.S. is on par with China in terms of droid design.

Read the full article at: robbreport.com

Brain cells control how fast you eat — and when you stop

 
 

Scientists found the cells in mice — and say they could lead to a better understanding of human appetite.

 

Brain cells that control how quickly mice eat, and when they stop, have been identified. The findings, published in Nature, could lead to a better understanding of human appetite, the researchers say.

 

Nerves in the gut, called vagal nerves, had already been shown to sense how much mice have eaten and what nutrients they have consumed. The vagal nerves use electrical signals to pass this information to a small region in the brainstem that is thought to influence when mice, and humans, stop eating. This region, called the caudal nucleus of the solitary tract, contains prolactin-releasing hormone (PRLH) neurons and GCG neurons. But, until now, studies have involved filling the guts of anaesthetized mice with liquid food, making it unclear how these neurons regulate appetite when mice are awake.

 

To answer this question, physiologist Zachary Knight at the University of California, San Francisco, and his colleagues implanted a light sensor in the brains of mice that had been genetically modified so that the PRLH neurons released a fluorescent signal when activated by electrical signals transmitted along neurons from elsewhere in the body. Knight and his team infused a liquid food called Ensure — which contains a mixture of fat, protein, sugar, vitamins and minerals — into the guts of these mice. Over a ten-minute period, the neurons became increasingly activated as more of the food was infused. This activity peaked a few minutes after the infusion ended. By contrast, the PRLH neurons did not activate when the team infused saline solution into the mice’s guts.
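Recordings like this are conventionally summarized as a relative change in fluorescence (ΔF/F) over a resting baseline. The snippet below illustrates that standard calculation on a synthetic trace shaped like the response described; it is a generic example, not the authors’ analysis code.

```python
# Illustrative only: computing a dF/F fluorescence trace and its peak,
# the standard readout in photometry-style experiments like this one.
# The synthetic trace stands in for a real recording.
import numpy as np

fs = 20.0                                  # assumed sampling rate, Hz
t = np.arange(0, int(14 * 60 * fs)) / fs   # 14 minutes of samples
# Synthetic response: activity ramps up over the 10-minute infusion and
# peaks a couple of minutes after it ends, as described in the study.
ramp = np.clip(t / 720.0, 0.0, 1.0)        # rises until minute 12
decay = np.exp(-np.maximum(t - 720.0, 0.0) / 120.0)
f = 1.0 + 0.3 * ramp * decay               # fluorescence, arbitrary units
f0 = f[: int(30 * fs)].mean()              # baseline F0 from the first 30 s
dff = (f - f0) / f0                        # dF/F, the standard readout
print(f"Peak dF/F = {dff.max():.2f} at t = {t[dff.argmax()] / 60:.1f} min")
```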

 

When the team allowed the mice to freely eat liquid food, the PRLH neurons activated within seconds of the animals starting to lick the food, but deactivated when they stopped licking. This showed that PRLH neurons respond differently, depending on whether signals are coming from the mouth or the gut, and suggests that signals from the mouth override those from the gut, says Knight. By using a laser to activate PRLH neurons in mice that were eating freely, the researchers could reduce how quickly the mice ate.

 

Further experiments showed that PRLH neurons did not activate during feeding in mice that lacked most of their ability to taste sweetness, suggesting that taste activated the neurons. The researchers also found that GCG neurons are activated by signals from the gut, and control when mice stop eating. “The signals from the mouth are controlling how fast you eat, and the signals from the gut are controlling how much you eat,” says Knight.

 

“I’m extremely impressed by this paper,” says neuroscientist Chen Ran at Harvard University in Boston, Massachusetts. The work provides original insights into how taste regulates appetite, he says. The findings probably apply to humans, too, Ran adds, because these neural circuits tend to be well conserved across the two species.

Read the full article at: www.nature.com

UK first to approve CRISPR treatment for human diseases

 

The landmark decision could transform the treatment of sickle-cell disease and β-thalassaemia — but the technology is expensive.

 

In a world first, the UK medicines regulator has approved a therapy that uses the CRISPR–Cas9 gene-editing tool as a treatment. The decision marks another high point for a biotechnology that has been lauded as revolutionary in the decade since its discovery.

 

The therapy, called Casgevy, will treat the blood conditions sickle-cell disease and β-thalassaemia. Sickle-cell disease, also known as sickle-cell anaemia, can cause debilitating pain, and people with β-thalassaemia often require regular blood transfusions.

 

“This is a landmark approval which opens the door for further applications of CRISPR therapies in the future for the potential cure of many genetic diseases,” said Kay Davies, a geneticist at the University of Oxford, UK, in comments to the UK Science Media Centre (SMC).

 

Nature magazine explains the research behind the treatment and explores what’s next.

What research led to the approval?

The approval by the Medicines and Healthcare products Regulatory Agency (MHRA) follows promising results from clinical trials that tested a one-time treatment, which is administered by intravenous infusion. The therapy was developed by the pharmaceutical company Vertex Pharmaceuticals in Boston, Massachusetts, and biotechnology company CRISPR Therapeutics in Zug, Switzerland.

 

The trial for sickle-cell disease has followed 29 out of 45 participants long enough to draw interim results. Casgevy completely relieved 28 of those people of debilitating episodes of pain for at least one year after treatment. Researchers also tested the treatment for a severe form of β-thalassaemia, which is conventionally treated with blood transfusions roughly once a month. In this trial, 54 people received Casgevy, of whom 42 participated for long enough to provide interim results. Among those 42 participants, 39 did not need a red-blood-cell transfusion for at least one year. The remaining three had their need for blood transfusions reduced by more than 70%.

Read the full article at: www.nature.com

The GPT to rule them all: ‘ScienceGPT’ is being trained on data from the Aurora supercomputer

Scientists are training a gargantuan one-trillion-parameter generative AI system dubbed ‘ScienceGPT’ based on scientific data from the newly established Aurora supercomputer.

 

The AuroraGPT AI model, which is being trained by researchers at Argonne National Laboratory (ANL) in Illinois, USA, draws its main computing power from Intel’s Ponte Vecchio GPUs and is backed by the US government. Training could take months to complete, according to HPC Wire, and is currently limited to 256 of the Aurora supercomputer’s roughly 10,000 nodes before being scaled up over time. Even within that limit, Intel and ANL are so far testing model training on a string of just 64 nodes, proceeding cautiously because of Aurora’s unique design as a supercomputer.

 

At one trillion parameters, ScienceGPT will be one of the largest LLMs out there. While it won’t quite hit the size of the reported 1.7-trillion-parameter GPT-4, developed by OpenAI, it’ll be almost twice as large as the 540-billion-parameter Pathways Language Model (PaLM), which powers Google’s Bard. “It combines all the text, codes, specific scientific results, papers, into the model that science can use to speed up research,” said Ogi Brkic, vice president and general manager for data center and HPC solutions, in a press briefing. It’ll operate like ChatGPT, but it’s unclear at the moment whether it will be multimodal, that is, whether it will generate different kinds of media such as text, images, and video.
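Some quick, generic arithmetic shows why a model of this size needs a machine like Aurora; the per-parameter byte counts below are common rules of thumb for mixed-precision training, not published AuroraGPT figures.

```python
# Back-of-the-envelope memory math for a one-trillion-parameter model.
# These are generic rules of thumb, not published AuroraGPT numbers.
params = 1_000_000_000_000

weights_fp16 = params * 2      # ~2 bytes per parameter for fp16 weights
# Mixed-precision training with an Adam-style optimizer is commonly
# estimated at ~16 bytes per parameter (weights, gradients, optimizer
# moments, master copies), which is why training spans many nodes.
train_state = params * 16

print(f"fp16 weights alone: {weights_fp16 / 1e12:.0f} TB")     # ~2 TB
print(f"typical training state: {train_state / 1e12:.0f} TB")  # ~16 TB
```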

 

Aurora – which will be the second exascale supercomputer in US history – has just established itself on the Top500 list of the most powerful supercomputers after years of development. It’s the second-most powerful supercomputer after Frontier, and is powered by 60,000 Intel GPUs while boasting 10,000 computing nodes across 166 racks, alongside more than 80,000 networking nodes. It is still being finished, however, and will likely exceed Frontier’s performance once it’s fully up to speed and all testing and fine-tuning is complete, said Top500.

Read the full article at: www.techradar.com

Neuroscientists Re-create Pink Floyd Song from Listeners’ Brain Activity

 

For the first time, scientists have demonstrated that the brain’s electrical activity can be decoded and used to reconstruct music.
Artificial intelligence has turned the brain’s electrical signals into somewhat garbled classic rock.

“Neuroscientists have reconstructed recognizable audio of a 1979 Pink Floyd song by using machine learning to decode electrical activity in the brains of listeners. As study participants undergoing surgery listened to ‘Another Brick in the Wall (Part 1),’ electrodes placed on the surface of the brain captured the activity of regions attuned to the song’s acoustic profile.”

 

Neuroscientists have worked for decades to decode what people are seeing, hearing or thinking from brain activity alone. In 2012 a team that included the new study’s senior author—cognitive neuroscientist Robert Knight of the University of California, Berkeley—became the first to successfully reconstruct audio recordings of words participants heard while wearing implanted electrodes. Others have since used similar techniques to reproduce recently viewed or imagined pictures from participants’ brain scans, including human faces and landscape photographs. But the recent PLOS Biology paper by Knight and his colleagues is the first to suggest that scientists can eavesdrop on the brain to synthesize music.

 

“These exciting findings build on previous work to reconstruct plain speech from brain activity,” says Shailee Jain, a neuroscientist at the University of California, San Francisco, who was not involved in the new study. “Now we’re able to really dig into the brain to unearth the sustenance of sound.”

 

To turn brain activity data into musical sound in the study, the researchers trained an artificial intelligence model to decipher data captured from thousands of electrodes that were attached to the participants as they listened to the Pink Floyd song while undergoing surgery. Why did the team choose Pink Floyd—and specifically “Another Brick in the Wall (Part 1)”? “The scientific reason, which we mention in the paper, is that the song is very layered. It brings in complex chords, different instruments and diverse rhythms that make it interesting to analyze,” says Ludovic Bellier, a cognitive neuroscientist and the study’s lead author. “The less scientific reason might be that we just really like Pink Floyd.”
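Studies in this line typically frame reconstruction as a regression problem: learn a mapping from windowed electrode features to the bins of an audio spectrogram, then invert the predicted spectrogram back into sound. The sketch below shows that general idea with ridge regression on synthetic data; it is a simplified illustration, not the model used in the paper.

```python
# Simplified illustration of stimulus reconstruction: fit a linear map
# from electrode features to spectrogram bins, then evaluate on held-out
# data. Synthetic arrays stand in for real neural recordings and audio.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_samples, n_electrodes, n_freq_bins = 2000, 128, 32

# Fake "neural" features and a spectrogram linearly related to them.
X = rng.standard_normal((n_samples, n_electrodes))
true_map = rng.standard_normal((n_electrodes, n_freq_bins))
Y = X @ true_map + 0.5 * rng.standard_normal((n_samples, n_freq_bins))

decoder = Ridge(alpha=1.0).fit(X[:1500], Y[:1500])  # train on first 75%
Y_hat = decoder.predict(X[1500:])                   # reconstruct the rest

corr = np.corrcoef(Y_hat.ravel(), Y[1500:].ravel())[0, 1]
print(f"Held-out reconstruction correlation: {corr:.2f}")
```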

Read the full article at: www.scientificamerican.com