Google Reveals Gemini, Its Much-Anticipated Large Language Model

Google’s Gemini is available to consumers in Bard and on the Pixel 8 Pro now, with an enterprise model coming Dec. 13. Get more details about the LLM.

Google has revealed Gemini, its long-rumored large language model and rival to GPT-4. Global users of Google Bard and the Pixel 8 Pro will be able to run Gemini starting now; an enterprise product, Gemini Pro, is coming on Dec. 13. Developers can sign up now for an early preview in Android AICore.

What is Gemini?

Gemini is a large language model that powers generative artificial intelligence applications; it can summarize text, create images and answer questions. Gemini was trained on Google’s Tensor Processing Units v4 and v5e.

Google’s Bard is a generative AI based on the PaLM large language model. Starting today, Gemini will be used to give Bard “more advanced reasoning, planning, understanding and more,” according to a Google press release.

Gemini size options

Gemini comes in three model sizes: Ultra, Pro and Nano. Ultra is the most capable, Nano is the smallest and most efficient, and Pro sits in the middle for general tasks. The Nano version is what Google is using on the Pixel, while Bard gets Pro. Google says it plans to run “extensive trust and safety checks” before releasing Gemini Ultra to select groups.

Gemini for coding

Gemini can code in Python, Java, C++, Go and other popular programming languages. Google used Gemini to upgrade Google’s AI-powered code generation system, AlphaCode.

Gemini will be added to more Google products

Next, Google plans to bring Gemini to Ads, Chrome and Duet AI. In the future, Gemini will be used in Google Search as well.

Competitors to Gemini

Gemini and the products built with it, such as chatbots, will compete with OpenAI’s GPT-4, Microsoft’s Copilot (which is based on OpenAI’s GPT-4), Anthropic’s Claude AI, Meta’s Llama 2 and more. Google claims Gemini Ultra outperforms GPT-4 in several benchmarks, including the Massive Multitask Language Understanding (MMLU) general-knowledge test and in Python code generation.

Does Gemini have an enterprise product?

Starting Dec. 13, enterprise customers and developers will be able to access Gemini Pro through the Gemini API in Google’s Vertex AI or Google AI Studio.

Google expects Gemini Nano to be generally available for developers and enterprise customers in early 2024. Android developers can use this LLM to build Gemini apps on-device through Android AICore.

Possible enterprise use cases for Gemini

Of particular interest to enterprise use cases might be Gemini’s ability to “understand and reason about users’ intent,” said Palash Nandy, engineering director at Google, in a demonstration video. Gemini generates a bespoke UI depending on whether the user is looking for images or text. In the same UI, Gemini will flag areas in which it doesn’t have enough information and ask for clarification. Through the bespoke UI, the user can explore other options with increasing detail.

Gemini has been trained on multimodal content from the very beginning, instead of starting with text and expanding to audio, images and video later, letting Gemini parse written and visual information with equal acuity. One example Google provides of how this might be useful for business is the prompt “Could Gemini help make a demo based on this video?” in which the AI translates video content into an original animation.

Gemini’s timing compared to other popular LLMs

Gemini had been hotly rumored as Google tried to compete with OpenAI. The New York Times reported in January 2023 that Google executives were “shaken” by OpenAI’s tech. More recently, Google reportedly struggled to release Gemini in languages other than English, leading it to delay an in-person launch event.

However, releasing its own large language model nearly a year after ChatGPT began receiving GPT-4-powered updates gives Google the chance to leapfrog that year of AI development. For example, Gemini is multimodal (i.e., able to work with text, video, speech and code) and lives natively on the Google Pixel 8. Users can access Gemini on their Pixel 8 without an internet connection, unlike ChatGPT, which started out in a browser.

Read the full article at:

Is AI Mimicking Consciousness or Truly Becoming Aware Gradually?


AI’s remarkable abilities, like those seen in ChatGPT, often seem conscious due to their human-like interactions.


The question is whether the language model also perceives our text when we prompt it. Or is it just a zombie, working from clever pattern-matching algorithms? Based on the text it generates, it is easy to be swayed into believing that the system might be conscious. However, in this new research, Jaan Aru, Matthew Larkum and Mac Shine take a neuroscientific angle to answer this question.


All three being neuroscientists, these authors argue that although the responses of systems like ChatGPT seem conscious, they are most likely not. First, the inputs to language models lack the embodied, embedded information content characteristic of our sensory contact with the world around us. Secondly, the architectures of present-day AI algorithms are missing key features of the thalamocortical system that have been linked to conscious awareness in mammals. Finally, the evolutionary and developmental trajectories that led to the emergence of living conscious organisms arguably have no parallels in artificial systems as envisioned today.


The existence of living organisms depends on their actions and their survival is intricately linked to multi-level cellular, inter-cellular, and organismal processes culminating in agency and consciousness. Thus, while it is tempting to assume that ChatGPT and similar systems might be conscious, this would severely underestimate the complexity of the neural mechanisms that generate consciousness in our brains.


Researchers do not have a consensus on how consciousness arises in our brains. What we do know, and what this new paper points out, is that the mechanisms are likely far more complex than those underlying current language models. For instance, as pointed out in this work, real neurons are not akin to the neurons in artificial neural networks. Biological neurons are real physical entities, which can grow and change shape, whereas neurons in large language models are just meaningless pieces of code. We still have a long way to go in understanding consciousness and, hence, a long way to go before conscious machines.

Read the full article at:

Tesla Competition: China Planning to Roll Out Humanoid Robots by 2025

The Chinese government will accelerate the widespread production of advanced humanoid robots by funding more startups in the robotics field.


Fourier Intelligence

China is hoping to welcome robotkind in just two years’ time. The country plans to produce its first humanoid robots by 2025, according to an ambitious blueprint published by the Ministry of Industry and Information Technology (MIIT) last week. The MIIT says the advanced bipedal droids have the power to reshape the world, carrying out menial, repetitive tasks in farms, factories, and houses to alleviate our workload.


“They are expected to become disruptive products after computers, smartphones, and new energy vehicles,” the document states. The government will accelerate the development of the robots by funding more young companies in the field, as reported by Bloomberg. Fourier Intelligence is one such Chinese startup hoping to start mass-producing general-purpose humanoid robots by the end of this year. The Fourier GR-1 measures five feet, four inches and weighs around 121 pounds. With 40 joints, the bot reportedly has “unparalleled agility” and human-like movement. It can also walk at roughly 3 mph and complete basic tasks.


China isn’t the only country working on our future robot helpers, of course. In the U.S., Tesla is continuing to refine Optimus. The bipedal humanoid robot has progressed rapidly since the first shaky prototype was revealed at the marque’s AI day in 2022. It can now do yoga, in fact. Tesla has yet to announce a firm timetable for when Optimus will hit the market, but CEO Elon Musk has previously said that the $20,000 robot could be ready in three to five years.


Agility Robotics is another U.S. company with a stated mission of “building robots for good.” It opened a robot manufacturing facility in Oregon earlier this year that can produce more than 10,000 Digit droids per year. It also recently announced that Amazon will begin testing Digit for use in its operations.


Meanwhile, Boston Dynamics—makers of Spot, the $75,000 robotic dog—has built another decidedly agile bipedal robot. Atlas showed it could move various obstacles earlier this year, after nailing a parkour course in 2021. Boston Dynamics’ Atlas is a research platform and not available for purchase, but the robot does show the U.S. is on par with China in terms of droid design.

Read the full article at:

Brain cells control how fast you eat — and when you stop


Scientists found the cells in mice — and say they could lead to a better understanding of human appetite.


Brain cells that control how quickly mice eat, and when they stop, have been identified. The findings, published in Nature, could lead to a better understanding of human appetite, the researchers say.


Nerves in the gut, called vagal nerves, had already been shown to sense how much mice have eaten and what nutrients they have consumed. The vagal nerves use electrical signals to pass this information to a small region in the brainstem that is thought to influence when mice, and humans, stop eating. This region, called the caudal nucleus of the solitary tract, contains prolactin-releasing hormone (PRLH) neurons and GCG neurons. But, until now, studies have involved filling the guts of anaesthetized mice with liquid food, making it unclear how these neurons regulate appetite when mice are awake.


To answer this question, physiologist Zachary Knight at the University of California, San Francisco, and his colleagues implanted a light sensor in the brains of mice that had been genetically modified so that the PRLH neurons released a fluorescent signal when activated by electrical signals transmitted along neurons from elsewhere in the body. Knight and his team infused a liquid food called Ensure — which contains a mixture of fat, protein, sugar, vitamins and minerals — into the guts of these mice. Over a ten-minute period, the neurons became increasingly activated as more of the food was infused. This activity peaked a few minutes after the infusion ended. By contrast, the PRLH neurons did not activate when the team infused saline solution into the mice’s guts.


When the team allowed the mice to freely eat liquid food, the PRLH neurons activated within seconds of the animals starting to lick the food, but deactivated when they stopped licking. This showed that PRLH neurons respond differently, depending on whether signals are coming from the mouth or the gut, and suggests that signals from the mouth override those from the gut, says Knight. By using a laser to activate PRLH neurons in mice that were eating freely, the researchers could reduce how quickly the mice ate.


Further experiments showed that PRLH neurons did not activate during feeding in mice that lacked most of their ability to taste sweetness, suggesting that taste activated the neurons. The researchers also found that GCG neurons are activated by signals from the gut, and control when mice stop eating. “The signals from the mouth are controlling how fast you eat, and the signals from the gut are controlling how much you eat,” says Knight.


“I’m extremely impressed by this paper,” says neuroscientist Chen Ran at Harvard University in Boston, Massachusetts. The work provides original insights on how taste regulates appetite, he says. The findings probably apply to humans, too, Ran adds, because these neural circuits tend to be well conserved across both species.

Read the full article at:

UK first to approve CRISPR treatment for human diseases


The landmark decision could transform the treatment of sickle-cell disease and β-thalassaemia — but the technology is expensive.


In a world first, the UK medicines regulator has approved a therapy that uses the CRISPR–Cas9 gene-editing tool as a treatment. The decision marks another high point for a biotechnology that has been lauded as revolutionary in the decade since its discovery.


The therapy, called Casgevy, will treat the blood conditions sickle-cell disease and β-thalassaemia. Sickle-cell disease, also known as sickle-cell anaemia, can cause debilitating pain, and people with β-thalassaemia often require regular blood transfusions.


“This is a landmark approval which opens the door for further applications of CRISPR therapies in the future for the potential cure of many genetic diseases,” said Kay Davies, a geneticist at the University of Oxford, UK, in comments to the UK Science Media Centre (SMC).


Nature magazine explains the research behind the treatment and explores what’s next.

What research led to the approval?

The approval by the Medicines and Healthcare products Regulatory Agency (MHRA) follows promising results from clinical trials that tested a one-time treatment, which is administered by intravenous infusion. The therapy was developed by the pharmaceutical company Vertex Pharmaceuticals in Boston, Massachusetts, and biotechnology company CRISPR Therapeutics in Zug, Switzerland.


The trial for sickle-cell disease has followed 29 of its 45 participants long enough to draw interim results. Casgevy completely relieved 28 of those people of debilitating episodes of pain for at least one year after treatment. Researchers also tested the treatment for a severe form of β-thalassaemia, which is conventionally treated with blood transfusions roughly once a month. In this trial, 54 people received Casgevy, of whom 42 participated for long enough to provide interim results. Among those 42 participants, 39 did not need a red-blood-cell transfusion for at least one year. The remaining three had their need for blood transfusions reduced by more than 70%.
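
As a reader's back-of-envelope check (not part of the study itself), the interim response rates implied by the figures above can be computed directly:

```python
# Sanity-check the interim trial figures quoted above.

# Sickle-cell trial: 28 of 29 evaluable participants were pain-free for >= 1 year.
sickle_evaluable = 29
sickle_pain_free = 28
sickle_rate = sickle_pain_free / sickle_evaluable

# Beta-thalassaemia trial: 39 of 42 evaluable participants needed no transfusion.
thal_evaluable = 42
thal_no_transfusion = 39
thal_rate = thal_no_transfusion / thal_evaluable

print(f"Sickle-cell response rate: {sickle_rate:.0%}")        # ~97%
print(f"Beta-thalassaemia response rate: {thal_rate:.0%}")    # ~93%
```

Both interim response rates exceed 90%, which is why the results are widely described as promising despite the small evaluable cohorts.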

Read the full article at:

The GPT to rule them all: ‘ScienceGPT’ is being trained on data from the Aurora supercomputer

Scientists are training a gargantuan one-trillion-parameter generative AI system dubbed ‘ScienceGPT’ based on scientific data from the newly established Aurora supercomputer.


The AuroraGPT AI model, which is being trained by researchers at Argonne National Laboratory (ANL) in Illinois, USA, is powered by Intel’s Ponte Vecchio GPUs, which provide the main computing power, and is backed by the US government. Training could take months to complete, according to HPC Wire, and is currently limited to 256 of the Aurora supercomputer’s roughly 10,000 nodes before being scaled up over time. For now, Intel and ANL are testing the model’s training on a string of just 64 nodes, proceeding with caution due to Aurora’s unique design as a supercomputer.


At one trillion parameters, ScienceGPT will be one of the largest LLMs out there. While it won’t quite hit the size of the reportedly 1.7-trillion-parameter GPT-4, developed by OpenAI, it’ll be almost twice as large as the 540-billion-parameter Pathways Language Model (PaLM), which powers Google’s Bard. “It combines all the text, codes, specific scientific results, papers, into the model that science can use to speed up research,” said Ogi Brkic, vice president and general manager for data center and HPC solutions, in a press briefing. It’ll operate like ChatGPT, but it is unclear whether it will be multimodal (that is, able to generate different kinds of media such as text, images, and video).
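
The size comparison above can be made concrete with a quick calculation. Note that GPT-4’s size is an unconfirmed report, and PaLM’s published size is 540 billion parameters (the article’s figure may differ slightly):

```python
# Rough parameter-count comparison of the models mentioned above.
# All figures are publicly reported, not independently verified.
models = {
    "AuroraGPT / ScienceGPT": 1.0e12,   # 1 trillion
    "GPT-4 (reported)":       1.7e12,   # 1.7 trillion, unconfirmed
    "PaLM (published)":       540e9,    # 540 billion
}

baseline = models["AuroraGPT / ScienceGPT"]
for name, params in models.items():
    print(f"{name}: {params/1e9:,.0f}B parameters "
          f"({params/baseline:.2f}x ScienceGPT)")
```

The 1T/540B ratio works out to roughly 1.85, which is the basis for the “almost twice as large” claim.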



Aurora – which will be the second exascale supercomputer in US history – has just established itself on the Top500 list of the most powerful supercomputers after years of development. It is the second-most powerful supercomputer after Frontier, and is powered by 60,000 Intel GPUs while boasting 10,000 computing nodes across 166 racks, alongside more than 80,000 networking nodes. It is still being finished, however, and will likely exceed Frontier’s performance when it is fully up to speed and all testing and fine-tuning is complete, said Top500.

Read the full article at:

Neuroscientists Re-create Pink Floyd Song from Listeners’ Brain Activity


For the first time, scientists have demonstrated that the brain’s electrical activity can be decoded and used to reconstruct music.
Artificial intelligence has turned the brain’s electrical signals into somewhat garbled classic rock.

Neuroscientists have reconstructed recognizable audio of a 1979 Pink Floyd song by using machine learning to decode electrical activity in the brains of listeners. As study participants undergoing surgery listened to “Another Brick in the Wall (Part 1),” electrodes placed on the surface of the brain captured the activity of regions attuned to the song’s acoustic profile.


Neuroscientists have worked for decades to decode what people are seeing, hearing or thinking from brain activity alone. In 2012 a team that included the new study’s senior author—cognitive neuroscientist Robert Knight of the University of California, Berkeley—became the first to successfully reconstruct audio recordings of words participants heard while wearing implanted electrodes. Others have since used similar techniques to reproduce recently viewed or imagined pictures from participants’ brain scans, including human faces and landscape photographs. But the recent PLOS Biology paper by Knight and his colleagues is the first to suggest that scientists can eavesdrop on the brain to synthesize music.


“These exciting findings build on previous work to reconstruct plain speech from brain activity,” says Shailee Jain, a neuroscientist at the University of California, San Francisco, who was not involved in the new study. “Now we’re able to really dig into the brain to unearth the sustenance of sound.”


To turn brain activity data into musical sound in the study, the researchers trained an artificial intelligence model to decipher data captured from thousands of electrodes that were attached to the participants as they listened to the Pink Floyd song while undergoing surgery. Why did the team choose Pink Floyd—and specifically “Another Brick in the Wall (Part 1)”? “The scientific reason, which we mention in the paper, is that the song is very layered. It brings in complex chords, different instruments and diverse rhythms that make it interesting to analyze,” says Ludovic Bellier, a cognitive neuroscientist and the study’s lead author. “The less scientific reason might be that we just really like Pink Floyd.”

Read the full article at:

Unlocking the secrets of spin with high-harmonic probes (Heusler compound)


Deep within every piece of magnetic material, electrons dance to the invisible tune of quantum mechanics. Their spins, akin to tiny atomic tops, dictate the magnetic behavior of the material they inhabit. This microscopic ballet is the cornerstone of magnetic phenomena, and it’s these spins that a team of JILA researchers—headed by JILA Fellows and University of Colorado Boulder professors Margaret Murnane and Henry Kapteyn—has learned to control with remarkable precision, potentially redefining the future of electronics and data storage.



In a Science Advances publication, the JILA team—along with collaborators from universities in Sweden, Greece, and Germany—probed the spin dynamics within a special material known as a Heusler compound: a mixture of metals that behaves like a single magnetic material.


For this study, the researchers utilized a compound of cobalt, manganese, and gallium, which behaved as a conductor for electrons whose spins were aligned upwards and as an insulator for electrons whose spins were aligned downwards. Using a form of light called extreme ultraviolet high-harmonic generation (EUV HHG) as a probe, the researchers could track the re-orientations of the spins inside the compound after exciting it with a femtosecond laser, which caused the sample to change its magnetic properties. The key to accurately interpreting the spin re-orientations was the ability to tune the color of the EUV HHG probe light.


“In the past, people haven’t done this color tuning of HHG,” explained co-first author and JILA graduate student Sinéad Ryan. “Usually, scientists only measured the signal at a few different colors, maybe one or two per magnetic element at most.” In a monumental first, the JILA team tuned their EUV HHG light probe across the magnetic resonances of each element within the compound to track the spin changes with a precision down to femtoseconds (a quadrillionth of a second).


“On top of that, we also changed the laser excitation fluence, so we were changing how much power we used to manipulate the spins,” Ryan elaborated, noting that this step was also an experimental first for this type of research.

Read the full article at:

New antifungal molecule kills fungi without toxicity to human and murine cells


Terrible to terrific: A new antifungal molecule tweaks a powerful drug to harness its power against infection while doing away with its toxicity.


A new antifungal molecule, devised by tweaking the structure of prominent antifungal drug Amphotericin B, has the potential to harness the drug’s power against fungal infections while doing away with its toxicity, researchers at the University of Illinois Urbana-Champaign and collaborators at the University of Wisconsin-Madison report in the journal Nature.


Amphotericin B (AmB), a naturally occurring small molecule produced by bacteria, is a drug used as a last resort to treat fungal infections. While AmB excels at killing fungi, it is reserved as a last line of defense because it is also toxic to the human patient – particularly the kidneys.


“Fungal infections are a public health crisis that is only getting worse. And they have the potential, unfortunately, of breaking out and having an exponential impact, kind of like COVID-19 did. So let’s take one of the powerful tools that nature developed to combat fungi and turn it into a powerful ally,” said research leader Dr. Martin D. Burke, an Illinois professor of chemistry, a professor in the Carle Illinois College of Medicine and also a medical doctor. 


“This work is a demonstration that, by going deep into the fundamental science, you can take a billion-year head start from nature and turn it into something that hopefully is going to have a big impact on human health,” Burke said. 



Burke’s group has spent years exploring AmB in hopes of making a derivative that can kill fungi without harm to humans. In previous studies, they developed and leveraged a building block-based approach to molecular synthesis and teamed up with a group specializing in molecular imaging tools called solid-state nuclear magnetic resonance, led by professor Chad Rienstra at the University of Wisconsin-Madison. Together, the teams uncovered the mechanism of the drug: AmB kills fungi by acting like a sponge to extract ergosterol from fungal cells. 


In the recent work, Burke’s group worked again with Rienstra’s group to find that AmB similarly kills human kidney cells by extracting cholesterol, the most common sterol in people. The researchers also resolved the atomic-level structure of AmB sponges when bound to both ergosterol and to cholesterol. 


“The atomic resolution models were really the key to zoom in and identify these very subtle differences in binding interactions between AmB and each of these sterols,” said Illinois graduate student Corinne Soutar, a co-first author of the paper.

“Using this structural information along with functional and computational studies, we achieved a significant breakthrough in understanding how AmB functions as a potent fungicidal drug,” Rienstra said. “This provided the insights to modify AmB and tune its binding properties, reducing its interaction with cholesterol and thereby reducing the toxicity.”

Read the full article at:

If You Had Gamma Ray Eyes the Moon Would Glow Brighter Than the Sun


If our eyes could see high-energy radiation called gamma rays, the Moon would appear brighter than the Sun! That’s how NASA’s Fermi Gamma-ray Space Telescope has seen our neighbor in space for the past decade. Gamma-ray observations are not sensitive enough to clearly see the shape of the Moon’s disk or any surface features. Instead, Fermi’s Large Area Telescope (LAT) detects a prominent glow centered on the Moon’s position in the sky.


Mario Nicola Mazziotta and Francesco Loparco, both at Italy’s National Institute of Nuclear Physics in Bari, have been analyzing the Moon’s gamma-ray glow as a way of better understanding another type of radiation from space: fast-moving particles called cosmic rays. “Cosmic rays are mostly protons accelerated by some of the most energetic phenomena in the universe, like the blast waves of exploding stars and jets produced when matter falls into black holes,” explained Mazziotta.


Because the particles are electrically charged, they’re strongly affected by magnetic fields, which the Moon lacks. As a result, even low-energy cosmic rays can reach the surface, turning the Moon into a handy space-based particle detector. When cosmic rays strike, they interact with the powdery surface of the Moon, called the regolith, to produce gamma-ray emission. The Moon absorbs most of these gamma rays, but some of them escape.


Mazziotta and Loparco analyzed Fermi LAT lunar observations to show how the view has improved during the mission. They rounded up data for gamma rays with energies above 31 million electron volts — more than 10 million times greater than the energy of visible light — and organized them over time, showing how longer exposures improve the view.


“Seen at these energies, the Moon would never go through its monthly cycle of phases and would always look full,” said Loparco. As NASA sets its sights on sending humans to the Moon by 2024 through the Artemis program, with the eventual goal of sending astronauts to Mars, understanding various aspects of the lunar environment takes on new importance. These gamma-ray observations are a reminder that astronauts on the Moon will require protection from the same cosmic rays that produce this high-energy gamma radiation.


While the Moon’s gamma-ray glow is surprising and impressive, the Sun does shine brighter in gamma rays with energies higher than 1 billion electron volts. Cosmic rays with lower energies do not reach the Sun because its powerful magnetic field screens them out. But much more energetic cosmic rays can penetrate this magnetic shield and strike the Sun’s denser atmosphere, producing gamma rays that can reach Fermi.


Although the gamma-ray Moon doesn’t show a monthly cycle of phases, its brightness does change over time. Fermi LAT data show that the Moon’s brightness varies by about 20% over the Sun’s 11-year activity cycle. Variations in the intensity of the Sun’s magnetic field during the cycle change the rate of cosmic rays reaching the Moon, altering the production of gamma rays.



Read the full article at: