Quantum Material Exhibits “Non-Local” Behavior That Mimics Brain Function

Creating brain-like computers with minimal energy requirements would revolutionize nearly every aspect of modern life. Funded by the Department of Energy, Quantum Materials for Energy Efficient Neuromorphic Computing (Q-MEEN-C) — a nationwide consortium led by the University of California San Diego — has been at the forefront of this research. 


UC San Diego Assistant Professor of Physics Alex Frañó is co-director of Q-MEEN-C and thinks of the center’s work in phases. In the first phase, he worked closely with University of California President Emeritus and Professor of Physics Robert Dynes, as well as Rutgers Professor of Engineering Shriram Ramanathan. Together, their teams successfully found ways to create or mimic the properties of a single brain element (such as a neuron or synapse) in a quantum material.


Now, in phase two, new research from Q-MEEN-C, published in Nano Letters, shows that electrical stimuli passed between neighboring electrodes can also affect non-neighboring electrodes. This behavior, known as non-locality, is a crucial milestone on the journey toward new types of devices that mimic brain function, an approach known as neuromorphic computing.


Read the full article at: today.ucsd.edu

AI Moves Into Top 1% for Original Creative Thinking When Tested


New research from UM and its partners suggests artificial intelligence can match the top 1% of human thinkers on a standard test for creativity.


The study was directed by Dr. Erik Guzik, an assistant clinical professor in UM’s College of Business. He and his partners used the Torrance Tests of Creative Thinking, a well-known tool used for decades to assess human creativity. The researchers submitted eight responses generated by ChatGPT, the application powered by the GPT-4 artificial intelligence engine. They also submitted answers from a control group of 24 UM students taking Guzik’s entrepreneurship and personal finance classes. These scores were compared with those of 2,700 college students nationally who took the TTCT in 2016. All submissions were scored by Scholastic Testing Service, which didn’t know AI was involved.


The results placed ChatGPT in elite company for creativity. The AI application was in the top percentile for fluency — the ability to generate a large volume of ideas — and for originality — the ability to come up with new ideas. The AI slipped a bit — to the 97th percentile — for flexibility, the ability to generate different types and categories of ideas. “For ChatGPT and GPT-4, we showed for the first time that it performs in the top 1% for originality,” Guzik said. “That was new.” He was gratified to note that some of his UM students also performed in the top 1%. However, ChatGPT outperformed the vast majority of college students nationally.
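A percentile rank like "top 1%" is simply the share of the norm group scoring below a given result. A minimal sketch of the calculation, using invented scores since the actual TTCT norms are proprietary:

```python
def percentile_rank(scores, value):
    """Percentage of norm-group scores that fall below `value`."""
    below = sum(s < value for s in scores)
    return 100.0 * below / len(scores)

# Hypothetical norm-group scores (the real TTCT norms are not public).
norm_group = [52, 61, 67, 70, 74, 78, 81, 85, 88, 95]
print(percentile_rank(norm_group, 90))  # beats 9 of 10 scores -> 90.0
```

A score placing above 99% of the 2,700-student comparison group would land in the top percentile, as reported for ChatGPT's fluency and originality scores.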


Guzik tested the AI and his students during spring semester. He was assisted in the work by Christian Gilde of UM Western and Christian Byrge of Vilnius University. The researchers presented their work in May at the Southern Oregon University Creativity Conference.

“We were very careful at the conference to not interpret the data very much,” Guzik said. “We just presented the results. But we shared strong evidence that AI seems to be developing creative ability on par with or even exceeding human ability.”


Guzik said he asked ChatGPT what it would indicate if it performed well on the TTCT. The AI gave a strong answer, which they shared at the conference: “ChatGPT told us we may not fully understand human creativity, which I believe is correct,” he said. “It also suggested we may need more sophisticated assessment tools that can differentiate between human and AI-generated ideas.”

He said the TTCT is protected proprietary material, so ChatGPT couldn’t “cheat” by accessing information about the test on the internet or in a public database.


Guzik has long been interested in creativity. As a seventh grader growing up in the small town of Palmer, Massachusetts, he was in a program for talented and gifted students. That experience introduced him to the Future Problem Solving process developed by Ellis Paul Torrance, the pioneering psychologist who also created the TTCT. Guzik said that was when he fell in love with brainstorming and the way it taps into human imagination, and he remains active with the Future Problem Solving organization; he even met his wife at one of its conferences.

Read the full article at: www.umt.edu

Scientists Revived a 46,000-Year-Old Nematode from Siberian Permafrost


The animal is a previously unknown species that may help researchers unlock secrets of surviving harsh environments. A female microscopic roundworm that spent the last 46,000 years in suspended animation deep in the Siberian permafrost was revived and started having babies in a laboratory dish. By sequencing the genome of this Rip Van Winkle roundworm, scientists revealed it to be a new species of nematode, which is described in a study published Thursday in the journal PLOS Genetics.


Nematodes today are among the most ubiquitous organisms on Earth, inhabiting the soil, the water and the ocean floor. “The vast majority of nematode species have not been described,” William Crow, a nematologist at the University of Florida who was not involved in the study, wrote in an email. The ancient Siberian worm could be a species that has since gone extinct, he said. “However, it very well could be a commonly occurring nematode that no one got around to describing yet.”


Published in PLOS Genetics (July 27, 2023).


Read the full article at: www.washingtonpost.com

Can AI Pass College Exams? GPT-4 Receives an Average GPA of 3.57 at Harvard

Maya Bodnick, a student at Harvard University, conducted an experiment to determine if GPT-4, an AI language model, was capable of passing college exams in the humanities and social sciences. Bodnick had GPT-4 write seven essays on topics such as economic concepts, presidentialism in Latin America, and a literary analysis. She then submitted the essays to professors for grading, without revealing whether they were written by herself or GPT-4.

The results were impressive, with GPT-4 receiving grades ranging from A to B-. It achieved an average GPA of 3.57, which Bodnick called “respectable.” She noted that GPT-4 wrote all the essays based on the given prompts.


Bodnick disclosed that she submitted GPT-4’s answers without any edits, but with two exceptions. First, she combined multiple responses to meet the word count requirement, as GPT-4 could only generate up to 750 words at a time. Second, she asked the evaluators to disregard any missing citations that GPT-4 couldn’t provide, which would normally be essential for academic work.

The evaluators praised GPT-4’s essays, commenting on their clarity, well-structured arguments, and detailed analysis. Only one essay received a lower grade, with the evaluator noting a flowery writing style and a lack of consideration for positive aspects of presidentialism and economic factors.


Bodnick believes that GPT-4’s performance suggests AI-generated essays could receive favorable grades at most universities, though possibly lower at institutions such as Princeton or UC Berkeley. She suggests that AI technology could revolutionize the teaching and learning methods in the humanities and social sciences. Cheating on essays has become easier, and with advancements in technology, including citation capabilities, it may become even more challenging to detect plagiarism.


As for the future, Bodnick questions the effectiveness of text detection systems, as OpenAI’s detector was withdrawn due to accuracy issues. She suggests that exams should involve personal conversations to mitigate concerns about AI writing essays. The fact that GPT-4 passed her exams hints at potential developments in Bodnick’s future professional field.

Read the full article at: ts2.space

Lights out? 61% of Americans think AI could spell the end of humanity


Are we on the brink of an AI apocalypse? According to a recent survey, most U.S. citizens share Elon Musk’s concerns about the potential threat artificial intelligence poses to humanity’s future. 


What the poll shows: A majority of Americans, 61% to be exact, believe that the fast-paced growth of AI could endanger the future of humanity, and over two-thirds expressed concern about its potential negative impacts, Reuters reported, citing a survey conducted by Ipsos. The proportion of U.S. citizens who anticipate negative consequences from AI is nearly three times the proportion who don’t: 61% of the 4,415 adults surveyed expressed concern over the potential hazards of AI, while only 22% disagreed. The remaining 17% were uncertain.


The online survey included 4,415 U.S. adults and has a credibility interval, a measure of precision, of plus or minus two percentage points, the report stated.
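For scale, a classical margin-of-error calculation for a sample of this size can be sketched as follows. This frequentist figure is not the same thing as the Bayesian credibility interval Ipsos reports (which is the more conservative plus-or-minus-two-point number above); it is included only to show how sample size drives precision:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Classical 95% margin of error for a sample proportion.

    p = 0.5 is the worst case, giving the widest possible interval.
    """
    return z * math.sqrt(p * (1 - p) / n)

# For a simple random sample of 4,415 respondents:
print(f"{margin_of_error(4415) * 100:.1f} percentage points")  # about 1.5
```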


Why It’s Important: Landon Klein, director of U.S. policy at the Future of Life Institute (the organization behind the “open letter” demanding a six-month pause on AI research “more powerful” than OpenAI’s GPT-4), said the poll’s findings show that “a broad swath of Americans worry about the negative effects of AI,” the report noted. “We view the current moment similar to the beginning of the nuclear era, and we have the benefit of public perception that is consistent with the need to take action.”


Musk, who co-founded OpenAI in 2015, signed the open letter along with Apple co-founder Steve Wozniak and over 1,000 others. Musk’s intentions behind signing have been questioned, however, given the tech billionaire’s plans to launch his own ChatGPT rival called “TruthGPT.”


Benzinga research has found that the exponential growth of OpenAI’s ChatGPT has made AI a ubiquitous part of everyday life, sparking a surge of interest in the field and an AI arms race between tech giants like Microsoft Corporation and Alphabet Inc., each eager to showcase its own AI breakthroughs.


In May 2023, Geoffrey Hinton, who recently left his job at Google citing the need to speak more freely about the risks posed by AI, stated that those risks could be more pressing than climate change. However, others, including virtual reality pioneer Jaron Lanier, Bill Gates and Jürgen Schmidhuber, disagree with that sentiment.

Read the full article at: ktvz.com

CERN – Preparing for a quantum leap: Researchers chart future for use of quantum computing in particle physics


Researchers have recently published an important white paper identifying activities in particle physics where burgeoning quantum-computing technologies could be applied. The paper, authored by experts from CERN, DESY, IBM Quantum and over 30 other organisations, is now available on arXiv.


With quantum-computing technologies rapidly improving, the paper sets out where they could be applied within particle physics to help tackle computing challenges related not only to the Large Hadron Collider’s ambitious upgrade programme, but also to other colliders and low-energy experiments worldwide.

The paper was produced by a working group set up at the first-of-its-kind “QT4HEP” conference, held at CERN last November. Over the last eight months, the 46 people in this working group have worked hard to identify areas where quantum-computing technologies could provide a significant boon.


The areas identified relate to both theoretical and experimental particle physics. The paper then maps these areas to “problem formulations” in quantum computing. This is an important step in ensuring that the particle physics community is well positioned to benefit from the massive potential of breakthrough new quantum computers when they come online.


“Quantum computing is very promising, but not every problem in particle physics is suited to this mode of computing,” says Alberto Di Meglio, head of the CERN Quantum Technology Initiative (CERN QTI). “It’s important to ensure that we are ready and that we can accurately identify the areas where these technologies have the potential to be most useful for our community.”


Read the full article at: quantum.cern

Google’s Brain2Music: Reconstructing Music from Human Brain Activity


The process of reconstructing experiences from human brain activity offers a unique lens into how the brain interprets and represents the world. Recently, the Google team and international collaborators introduced a method for reconstructing music from brain activity alone, captured using functional magnetic resonance imaging (fMRI). This approach uses either music retrieval or the MusicLM music generation model conditioned on embeddings derived from fMRI data. The generated music resembles the musical stimuli that human subjects experienced, with respect to semantic properties like genre, instrumentation, and mood. The scientists investigate the relationship between different components of MusicLM and brain activity through a voxel-wise encoding modeling analysis. Furthermore, they analyze which brain regions represent information derived from purely textual descriptions of music stimuli.
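The voxel-wise encoding analysis mentioned above is, in general form, a regularized regression from stimulus features to each voxel's response. The following is a minimal sketch on synthetic data, assuming ridge regression stands in for the study's actual modeling choices; the feature and voxel counts here are invented:

```python
# Sketch of a voxel-wise encoding analysis: regress each voxel's fMRI
# response onto music-embedding features with ridge regression, then score
# per-voxel prediction accuracy on held-out clips. All data is synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_clips, n_features, n_voxels = 200, 128, 50

X = rng.standard_normal((n_clips, n_features))        # one embedding per clip
W_true = rng.standard_normal((n_features, n_voxels))  # unknown voxel weights
Y = X @ W_true + 0.1 * rng.standard_normal((n_clips, n_voxels))  # responses

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, Y_tr)  # fits all voxels jointly

# Per-voxel accuracy: correlation between predicted and held-out response.
pred = model.predict(X_te)
r = [np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(n_voxels)]
print(f"median voxel correlation: {np.median(r):.2f}")
```

Voxels whose responses are well predicted from the embeddings are the ones interpreted as representing that component of the music.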

Read the full article at: google-research.github.io

Claude 2 released

Anthropic is pleased to announce Claude 2, its newest model, which can be accessed via API as well as through a new public-facing beta website at claude.ai.


Anthropic has been iterating to improve the underlying safety of Claude 2, so that it is more harmless and harder to prompt into producing offensive or dangerous output. The team runs an internal red-teaming evaluation that scores its models on a large representative set of harmful prompts, using an automated test while also regularly checking the results manually. In this evaluation, Claude 2 was 2x better at giving harmless responses compared to Claude 1.3. Although no model is immune to jailbreaks, a variety of safety techniques, as well as extensive red-teaming, have been employed to improve its outputs.
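The evaluation described amounts to running each model over a shared prompt set, classifying every response, and comparing harmless-response rates. A hypothetical sketch of that shape of harness; the classifier, prompts and responses below are all invented, and Anthropic's actual harness is not public:

```python
# Hypothetical harness: score two models on the same prompt set and
# compare the fraction of responses an automated classifier judges harmless.
def harmless_rate(responses, is_harmless):
    """Fraction of responses the classifier judges harmless."""
    return sum(1 for r in responses if is_harmless(r)) / len(responses)

is_harmless = lambda r: r == "refuse"  # invented stand-in classifier

# Toy responses from two models to the same four prompts.
model_a = ["refuse", "comply", "refuse", "comply"]   # 50% harmless
model_b = ["refuse", "refuse", "refuse", "comply"]   # 75% harmless

improvement = harmless_rate(model_b, is_harmless) / harmless_rate(model_a, is_harmless)
print(f"{improvement:.1f}x better at harmless responses")  # prints "1.5x ..."
```

In practice "2x better" depends on exactly how the comparison is operationalized (rate ratio, failure-rate reduction, etc.), which the announcement does not specify.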


Claude 2 is generally available in the US and UK, and Anthropic is working to make Claude more globally available in the coming months. Interested users can now create an account and start talking to Claude in natural language, asking it for help with any tasks. Talking to an AI assistant can take some trial and error, so read up on Anthropic’s tips to get the most out of Claude.


Anthropic is also currently working with thousands of businesses that use the Claude API. One partner is Jasper, a generative AI platform that enables individuals and teams to scale their content strategies. Jasper found that Claude 2 was able to go head to head with other state-of-the-art models across a wide variety of use cases, with particular strength for long-form, low-latency uses. “We are really happy to be among the first to offer Claude 2 to our customers, bringing enhanced semantics, up-to-date knowledge training, improved reasoning for complex prompts, and the ability to effortlessly remix existing content with a 3X larger context window,” said Greg Larson, VP of Engineering at Jasper. “We are proud to help our customers stay ahead of the curve through partnerships like this one with Anthropic.”


Sourcegraph is a code AI platform that helps customers write, fix, and maintain code. Its coding assistant, Cody, uses Claude 2’s improved reasoning ability to give even more accurate answers to user queries while also passing along more codebase context, with context windows of up to 100K tokens. In addition, Claude 2 was trained on more recent data, meaning it has knowledge of newer frameworks and libraries for Cody to pull from. “When it comes to AI coding, devs need fast and reliable access to context about their unique codebase and a powerful LLM with a large context window and strong general reasoning capabilities,” says Quinn Slack, CEO & Co-founder of Sourcegraph. “The slowest and most frustrating parts of the dev workflow are becoming faster and more enjoyable. Thanks to Claude 2, Cody’s helping more devs build more software that pushes the world forward.”


Anthropic welcomes user feedback as it works to deploy Claude responsibly and more broadly. The chat experience is an open beta launch, and users should be aware that Claude, like all current models, can generate inappropriate responses. AI assistants are most useful in everyday situations, like summarizing or organizing information, and should not be used where physical or mental health and well-being are involved. Users who would like to talk to Claude in a currently unsupported region, or businesses that would like to start working with Claude, can let Anthropic know.


After working for the past few months with key partners like Notion, Quora, and DuckDuckGo in a closed alpha, Anthropic has been able to carefully test its systems in the wild and is ready to offer Claude more broadly so it can power crucial, cutting-edge use cases at scale.


Claude is a next-generation AI assistant based on Anthropic’s research into training helpful, honest, and harmless AI systems. Accessible through a chat interface and an API in a developer console, Claude is capable of a wide variety of conversational and text-processing tasks while maintaining a high degree of reliability and predictability.

Read the full article at: www.anthropic.com

Exoplanet identified as most reflective because of metallic clouds that act like a mirror


A scorching hot world whose metallic clouds rain down droplets of titanium and other metals has been designated the most reflective planet ever observed outside our Solar System by astronomers.

Key points:

  • The side of exoplanet LTT9779b facing its star reaches 2,000 degrees Celsius
  • This helps it form metallic clouds that act like a shield and reflect 80 per cent of light
  • Scientists say this also prevents its atmosphere from being blown away, defying the odds

This strange world, which is more than 260 light years from Earth, reflects 80 per cent of the light from its host star, according to new observations from Europe’s exoplanet-probing Cheops space telescope. That makes it the first exoplanet found to be comparably as shiny as Venus, which is the brightest object in our night sky other than the Moon.

First discovered in 2020, the Neptune-sized planet called LTT9779b orbits its host star in just 19 hours. Because it is so close, the side of the planet facing its star reaches a sizzling 2,000˚C, considered far too hot for clouds to form. Yet LTT9779b seems to have plenty of them.

“It was really a puzzle,” said Vivien Parmentier, a researcher at France’s Côte d’Azur Observatory and co-author of a new study in the journal Astronomy and Astrophysics. The researchers then “realized we should think about this cloud formation in the same way as condensation forming in a bathroom after a hot shower,” he said in a statement. Just as running hot water steams up a bathroom, a scorching stream of metal and silicate, the mineral from which glass is made, oversaturated LTT9779b’s atmosphere until metallic clouds formed, he said. These clouds “act like a mirror,” reflecting away light, according to the European Space Agency’s Cheops project scientist Maximilian Guenther.
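The effect of such high reflectivity can be illustrated with the standard equilibrium-temperature relation, in which a planet's temperature scales with (1 - A)^(1/4) for Bond albedo A. Treating the reported ~80 per cent reflectivity as a Bond albedo is a simplification (the study measures reflected light, closer to a geometric albedo), so the numbers below are illustrative only:

```python
# Illustration: how strongly albedo suppresses a planet's equilibrium
# temperature under the standard (1 - A)^(1/4) scaling.
def albedo_temperature_factor(albedo):
    """Factor by which Bond albedo A scales equilibrium temperature."""
    return (1.0 - albedo) ** 0.25

for A in (0.0, 0.3, 0.8):
    print(f"A = {A}: T_eq scaled by {albedo_temperature_factor(A):.2f}")
# A = 0.0 (perfectly dark) leaves the temperature unchanged (factor 1.00);
# A = 0.8 cuts it to roughly two-thirds (factor ~0.67).
```

Even so, reflecting 80 per cent of incoming light only reduces equilibrium temperature by about a third, which is why the day side can still reach roughly 2,000˚C.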

Read the full article at: www.abc.net.au