The AI Sommelier: How AI Has Revolutionized the Wine Industry and How Your Industry Can Benefit

The integration of Artificial Intelligence (AI) in the wine industry marks a significant shift from traditional viticulture and enology practices to a more technologically advanced approach. AI’s application ranges from vineyard management to winemaking processes, ultimately impacting the quality, efficiency, and sustainability of wine production.

Read the full article at: www.forbes.com

Google Reveals Gemini, Its Much-Anticipated Large Language Model

Google’s Gemini is available to consumers now in Bard and on the Pixel 8 Pro, with an enterprise model coming Dec. 13. Get more details about the LLM.

Google has revealed Gemini, its long-rumored large language model and rival to GPT-4. Global users of Google Bard and the Pixel 8 Pro will be able to run Gemini starting now; an enterprise product, Gemini Pro, is coming on Dec. 13. Developers can sign up now for an early preview in Android AICore.

What is Gemini?

Gemini is a large language model that runs generative artificial intelligence applications; it can summarize text, create images and answer questions. Gemini was trained on Google’s Tensor Processing Units v4 and v5e.

Google’s Bard is a generative AI based on the PaLM large language model. Starting today, Gemini will be used to give Bard “more advanced reasoning, planning, understanding and more,” according to a Google press release.

SEE: Microsoft invested $3.2 billion in AI in the UK. (TechRepublic)

Gemini size options

Gemini comes in three model sizes: Ultra, Pro and Nano. Ultra is the most capable, Nano is the smallest and most efficient, and Pro sits in the middle for general tasks. The Nano version is what Google is using on the Pixel, while Bard gets Pro. Google says it plans to run “extensive trust and safety checks” before releasing Gemini Ultra to select groups.

Gemini for coding

Gemini can code in Python, Java, C++, Go and other popular programming languages. Google used Gemini to upgrade AlphaCode, its AI-powered code generation system.

Gemini will be added to more Google products

Next, Google plans to bring Gemini to Ads, Chrome and Duet AI. In the future, Gemini will be used in Google Search as well.

Competitors to Gemini

Gemini and the products built with it, such as chatbots, will compete with OpenAI’s GPT-4, Microsoft’s Copilot (which is based on OpenAI’s GPT-4), Anthropic’s Claude AI, Meta’s Llama 2 and more. Google claims Gemini Ultra outperforms GPT-4 on several benchmarks, including the Massive Multitask Language Understanding (MMLU) general-knowledge test and Python code generation.

Does Gemini have an enterprise product?

Starting Dec. 13, enterprise customers and developers will be able to access Gemini Pro through the Gemini API in Google’s Vertex AI or Google AI Studio.
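As a rough illustration of that developer path, below is a minimal sketch of calling Gemini Pro through the Google AI Studio Python SDK (the google-generativeai package). The model identifier “gemini-pro” and the GEMINI_API_KEY environment variable are assumptions drawn from Google’s launch documentation, not details from this article.

import os
import google.generativeai as genai

# Authenticate with an API key created in Google AI Studio.
# GEMINI_API_KEY is an assumed environment variable name.
genai.configure(api_key=os.environ["GEMINI_API_KEY"])

# "gemini-pro" is the text model identifier used at launch.
model = genai.GenerativeModel("gemini-pro")

# Send a single prompt and print the generated text.
response = model.generate_content("Explain what a large language model is in two sentences.")
print(response.text)

Teams that already run workloads on Vertex AI can reach the same model through the Vertex AI SDK instead; the Google AI Studio route sketched here is simply the quicker way to experiment.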

Google expects Gemini Nano to be generally available for developers and enterprise customers in early 2024. Android developers can use this LLM to build Gemini-powered apps on-device through Android AICore.

Possible enterprise use cases for Gemini

Of particular interest to enterprise use cases might be Gemini’s ability to “understand and reason about users’ intent,” said Palash Nandy, engineering director at Google, in a demonstration video. Gemini generates a bespoke UI depending on whether the user is looking for images or text. In the same UI, Gemini will flag areas in which it doesn’t have enough information and ask for clarification. Through the bespoke UI, the user can explore other options with increasing detail.

Gemini was trained on multimodal content from the very beginning, instead of starting with text and expanding to audio, images and video later, letting it parse written or visual information with equal acuity. One example Google provides of how this might be useful for business is the prompt “Could Gemini help make a demo based on this video?”, in which the AI translates video content into an original animation.

Gemini’s timing compared to other popular LLMs

Gemini has been hotly rumored, as Google tries to compete with OpenAI. The New York Times reported Google executives were “shaken” by OpenAI’s tech in January 2023. More recently, Google supposedly struggled with releasing Gemini in languages other than English, leading to a delay of an in-person launch event.

However, releasing Google’s own large language model after ChatGPT has received gradual GPT-4-powered updates for nearly a year means Google has the advantage of leapfrogging the last year of AI development. For example, Gemini is multimodal (i.e., able to work with text, video, speech and code) and lives natively on the Google Pixel 8 Pro. Users can access Gemini on the device without an internet connection, unlike ChatGPT, which started out in a browser.

Read the full article at: www.techrepublic.com

Is AI Mimicking Consciousness or Gradually Becoming Truly Aware?

 

AI’s remarkable abilities, like those seen in ChatGPT, often seem conscious due to their human-like interactions.

 

The question is whether the language model also perceives our text when we prompt it. Or is it just a zombie, running on clever pattern-matching algorithms? Based on the text it generates, it is easy to be swayed into believing that the system might be conscious. However, in this new research, Jaan Aru, Matthew Larkum and Mac Shine take a neuroscientific angle to answer this question.

 

All three authors are neuroscientists, and they argue that although the responses of systems like ChatGPT seem conscious, they are most likely not. First, the inputs to language models lack the embodied, embedded information content characteristic of our sensory contact with the world around us. Second, the architectures of present-day AI algorithms are missing key features of the thalamocortical system that have been linked to conscious awareness in mammals. Finally, the evolutionary and developmental trajectories that led to the emergence of living conscious organisms arguably have no parallels in artificial systems as envisioned today.

 

The existence of living organisms depends on their actions, and their survival is intricately linked to multi-level cellular, inter-cellular, and organismal processes culminating in agency and consciousness. Thus, while it is tempting to assume that ChatGPT and similar systems might be conscious, doing so would severely underestimate the complexity of the neural mechanisms that generate consciousness in our brains.

 

Researchers do not have a consensus on how consciousness arises in our brains. What we know, and what this new paper points out, is that the mechanisms are likely far more complex than those underlying current language models. For instance, as pointed out in this work, real neurons are not akin to the neurons in artificial neural networks. Biological neurons are real physical entities, which can grow and change shape, whereas neurons in large language models are just meaningless pieces of code. We still have a long way to go in understanding consciousness and, hence, a long way to go before we reach conscious machines.

Read the full article at: neurosciencenews.com

Tesla Competition: China Planning to Roll Out Humanoid Robots by 2025

The Chinese government will accelerate the widespread production of advanced humanoid robots by funding more startups in the robotics field.

 

China is hoping to welcome robotkind in just two years’ time. The country plans to produce its first humanoid robots by 2025, according to an ambitious blueprint published by the Ministry of Industry and Information Technology (MIIT) last week. The MIIT says the advanced bipedal droids have the power to reshape the world, carrying out menial, repetitive tasks in farms, factories, and homes to alleviate our workload.

 

“They are expected to become disruptive products after computers, smartphones, and new energy vehicles,” the document states. The government will accelerate the development of the robots by funding more young companies in the field, as reported by Bloomberg.

Fourier Intelligence is one such Chinese startup, hoping to start mass-producing general-purpose humanoid robots by the end of this year. The Fourier GR-1 measures five feet, four inches and weighs around 121 pounds. With 40 joints, the bot reportedly has “unparalleled agility” and human-like movement. It can also walk at roughly 3 mph and complete basic tasks.

 

China isn’t the only country working on our future robot helpers, of course. In the U.S., Tesla is continuing to refine Optimus. The bipedal humanoid robot has progressed rapidly since the first shaky prototype was revealed at the marque’s AI day in 2022. It can now do yoga, in fact. Tesla has yet to announce a firm timetable for when Optimus will hit the market, but CEO Elon Musk has previously said that the $20,000 robot could be ready in three to five years.

 

Agility Robotics is another U.S. company with a mission of “building robots for good.” It opened a robot manufacturing facility in Oregon earlier this year that can produce more than 10,000 Digit droids per year. It also recently announced that Amazon will begin testing Digit for use in its operations.

 

Meanwhile, Boston Dynamics, maker of Spot, the $75,000 robotic dog, has built another decidedly agile bipedal robot. Atlas showed it could move various objects earlier this year, after nailing a parkour course in 2021. Boston Dynamics’ Atlas is a research platform and not available for purchase, but the robot does show the U.S. is on par with China in terms of droid design.

Read the full article at: robbreport.com

Brain cells control how fast you eat — and when you stop


Scientists found the cells in mice — and say they could lead to a better understanding of human appetite.

 

Brain cells that control how quickly mice eat, and when they stop, have been identified. The findings, published in Nature, could lead to a better understanding of human appetite, the researchers say.

 

Nerves in the gut, called vagal nerves, had already been shown to sense how much mice have eaten and what nutrients they have consumed. The vagal nerves use electrical signals to pass this information to a small region in the brainstem that is thought to influence when mice, and humans, stop eating. This region, called the caudal nucleus of the solitary tract, contains prolactin-releasing hormone (PRLH) neurons and GCG neurons. But, until now, studies have involved filling the guts of anaesthetized mice with liquid food, making it unclear how these neurons regulate appetite when mice are awake.

 

To answer this question, physiologist Zachary Knight at the University of California, San Francisco, and his colleagues implanted a light sensor in the brains of mice that had been genetically modified so that the PRLH neurons released a fluorescent signal when activated by electrical signals transmitted along neurons from elsewhere in the body. Knight and his team infused a liquid food called Ensure — which contains a mixture of fat, protein, sugar, vitamins and minerals — into the guts of these mice. Over a ten-minute period, the neurons became increasingly activated as more of the food was infused. This activity peaked a few minutes after the infusion ended. By contrast, the PRLH neurons did not activate when the team infused saline solution into the mice’s guts.

 

When the team allowed the mice to freely eat liquid food, the PRLH neurons activated within seconds of the animals starting to lick the food, but deactivated when they stopped licking. This showed that PRLH neurons respond differently, depending on whether signals are coming from the mouth or the gut, and suggests that signals from the mouth override those from the gut, says Knight. By using a laser to activate PRLH neurons in mice that were eating freely, the researchers could reduce how quickly the mice ate.

 

Further experiments showed that PRLH neurons did not activate during feeding in mice that lacked most of their ability to taste sweetness, suggesting that taste activated the neurons. The researchers also found that GCG neurons are activated by signals from the gut, and control when mice stop eating. “The signals from the mouth are controlling how fast you eat, and the signals from the gut are controlling how much you eat,” says Knight.

 

“I’m extremely impressed by this paper,” says neuroscientist Chen Ran at Harvard University in Boston, Massachusetts. The work provides original insights into how taste regulates appetite, he says. The findings probably apply to humans, too, Ran adds, because these neural circuits tend to be well conserved across species.

Read the full article at: www.nature.com