Yuga Labs, the creator of the Bored Ape Yacht Club (BAYC) NFT collection, has announced it is raising $450 million at a $4 billion valuation. The NFT company said the funds would be used to scale its team and operations, which it confirmed include a new metaverse project called Otherside.

The metaverse project, Otherside, will integrate avatars from a number of NFT projects, including BAYC. Earlier this week, Yuga Labs tweeted a teaser that a metaverse project powered by its new ape-themed cryptocurrency, ApeCoin, was in the works.

Among the investors are Andreessen Horowitz, along with gaming studio Animoca Brands and crypto firm FTX, both of which are already associated with Yuga Labs.

Google, Samsung, and Adidas, as well as celebrities such as Colin Kaepernick, Shaquille O’Neal, Steve Aoki, and Timbaland, among others, also invested in the funding round.

 

Learn more / En savoir plus / Mehr erfahren:

 

https://www.scoop.it/topic/21st-century-innovative-technologies-and-developments/?&tag=Metaverse

 

https://www.scoop.it/t/securite-pc-et-internet/?&tag=crypto-currency

 

https://www.scoop.it/topic/21st-century-innovative-technologies-and-developments/?&tag=Coinbase

 

https://www.scoop.it/topic/21st-century-innovative-technologies-and-developments/?&tag=NFT

 

Read the full article at: www.zdnet.com

Atomic clocks are the best sensors mankind has ever built. Today, they can be found in national standards institutes and aboard the satellites of navigation systems. Scientists all over the world are working to further optimize the precision of these clocks. Now, a research group led by Peter Zoller, a theorist from Innsbruck, Austria, has developed a new concept that can be used to operate sensors with even greater precision, irrespective of which technical platform is used to make the sensor. “We answer the question of how precise a sensor can be with existing control capabilities, and give a recipe for how this can be achieved,” explain Denis Vasilyev and Raphael Kaubrügger from Peter Zoller’s group at the Institute of Quantum Optics and Quantum Information at the Austrian Academy of Sciences in Innsbruck.

 

For this purpose, the physicists use a method from quantum information processing: variational quantum algorithms describe a circuit of quantum gates that depends on free parameters. Through optimization routines, the sensor autonomously finds the best settings for an optimal result. “We applied this technique to a problem from metrology—the science of measurement,” Vasilyev and Kaubrügger explain. “This is exciting because historically, advances in atomic physics were motivated by metrology, and in turn quantum information processing emerged from that. So, we’ve come full circle here,” Peter Zoller says. With the new approach, scientists can optimize quantum sensors to the point where they achieve the best precision that is technically permissible.
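The workflow described here (a parameterized circuit whose free parameters a classical optimizer tunes until the measurement is as sensitive as possible) can be illustrated with a deliberately simple toy model. The sketch below is not the Innsbruck group's algorithm; it is a hypothetical single-qubit Ramsey-style phase sensor, with two made-up variational angles tuned by SciPy's Nelder-Mead optimizer to maximize the per-shot signal-to-noise ratio of a phase measurement.

```python
# Toy sketch of the variational-sensor idea (not the published algorithm):
# a parameterized single-qubit circuit Ry(t1) -> phase -> Ry(t2), whose
# free angles t1, t2 are tuned by a classical optimizer to maximize
# sensitivity to the accumulated phase phi.
import numpy as np
from scipy.optimize import minimize

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def rz(phi):
    return np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])

def p_excited(params, phi):
    """Probability of measuring |1> after Ry(t1) -> Rz(phi) -> Ry(t2)."""
    t1, t2 = params
    psi = ry(t2) @ rz(phi) @ ry(t1) @ np.array([1.0, 0.0])
    return np.abs(psi[1]) ** 2

def neg_sensitivity(params, phi0=0.4, eps=1e-5):
    """Negative per-shot signal-to-noise ratio |dP/dphi| / sqrt(P(1-P))."""
    dp = (p_excited(params, phi0 + eps) - p_excited(params, phi0 - eps)) / (2 * eps)
    p = p_excited(params, phi0)
    var = max(p * (1 - p), 1e-12)  # projection-noise variance of a single shot
    return -abs(dp) / np.sqrt(var)

# The classical loop "autonomously finds the best settings":
result = minimize(neg_sensitivity, x0=[1.0, 1.0], method="Nelder-Mead")
print(-result.fun)  # should approach 1, the single-qubit sensitivity limit
```

For a single qubit, the per-shot signal-to-noise ratio cannot exceed 1, so the optimizer converging toward 1 is this toy model's analogue of reaching the best technically permissible precision; the real scheme applies the same loop to entangled many-atom states.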

 

Better measurements with little extra effort

For some time, it has been understood that atomic clocks could run even more accurately by exploiting quantum mechanical entanglement. However, there has been a lack of methods to realize robust entanglement for such applications. The Innsbruck physicists are now using tailor-made entanglement that is precisely tuned to real-world requirements. With their method, they generate exactly the combination consisting of quantum state and measurements that is optimal for each individual quantum sensor. This allows the precision of the sensor to be brought close to the optimum possible according to the laws of nature, with only a slight increase in overhead. “In the development of quantum computers, we have learned to create tailored entangled states,” says Christian Marciniak from the Department of Experimental Physics at the University of Innsbruck. “We are now using this knowledge to build better sensors.”

 

Demonstrating quantum advantage with sensors

This theoretical concept has now been implemented in practice for the first time at the University of Innsbruck, as the research group led by Thomas Monz and Rainer Blatt reports in Nature. The physicists performed frequency measurements based on variational quantum calculations on their ion-trap quantum computer. Because the interactions used in linear ion traps are still relatively easy to simulate on classical computers, their theorist colleagues were able to check the necessary parameters on a supercomputer at the University of Innsbruck. Although the experimental setup is by no means perfect, the results agree surprisingly well with the theoretically predicted values. Since such simulations are not feasible for all sensors, the scientists also demonstrated a second approach: they optimized the parameters automatically, without prior knowledge. “Similar to machine learning, the programmable quantum computer finds its optimal mode autonomously as a high-precision sensor,” says experimental physicist Thomas Feldker, describing the underlying mechanism.

Read the full article at: phys.org

Using data from ESA’s Gaia mission, astronomers have shown that a part of the Milky Way known as the ‘thick disc’ began forming 13 billion years ago, around 2 billion years earlier than expected, and just 0.8 billion years after the Big Bang.

 

This surprising result comes from an analysis performed by Maosheng Xiang and Hans-Walter Rix of the Max Planck Institute for Astronomy in Heidelberg, Germany. They took brightness and positional data from Gaia’s Early Data Release 3 (EDR3) and combined it with measurements of the stars’ chemical compositions, taken from China’s Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST), for roughly 250 000 stars in order to derive their ages.

 

They chose to look at subgiant stars. In these stars, energy has stopped being generated in the core and has moved into a shell around it; the star itself is transforming into a red giant. Because the subgiant phase is a relatively brief evolutionary stage in a star’s life, it permits the star’s age to be determined with great accuracy, but it is still a tricky calculation.

 

How old are the stars?

The age of a star is one of the most difficult parameters to determine. It cannot be measured directly but must be inferred by comparing a star’s characteristics with computer models of stellar evolution. The compositional data helps with this. The Universe was born with almost exclusively hydrogen and helium. The other chemical elements, known collectively as metals to astronomers, are made inside stars and expelled back into space at the end of a star’s life, where they can be incorporated into the next generation of stars. So, older stars have fewer metals and are said to have lower metallicity.

 

The LAMOST data gives the metallicity. Together, the brightness and metallicity allow astronomers to extract the star’s age from the computer models. Before Gaia, astronomers were routinely working with uncertainties of 20-40 percent, which could result in the determined ages being imprecise by a billion years or more.
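The extraction step described here is essentially an interpolation: a star's observed brightness and metallicity are located within a grid of stellar-evolution models, and the age is read off. The sketch below illustrates the idea with an invented, purely illustrative model grid (the numbers are not real isochrone values) and a hypothetical helper `age_from_models`:

```python
# Toy illustration (invented numbers, not a real stellar-model grid) of
# reading a star's age off a grid of evolution models given its observed
# brightness and metallicity.
import numpy as np
from scipy.interpolate import griddata

# Columns: absolute magnitude, metallicity [Fe/H], model age in Gyr
model_grid = np.array([
    [3.0, -2.0, 12.5], [3.2, -2.0, 13.0], [3.4, -2.0, 13.4],
    [3.0, -1.0, 10.0], [3.2, -1.0, 10.8], [3.4, -1.0, 11.5],
    [3.0,  0.0,  6.0], [3.2,  0.0,  7.0], [3.4,  0.0,  8.0],
])

def age_from_models(magnitude, feh):
    """Interpolate an age (Gyr) at the observed brightness and metallicity."""
    return float(griddata(model_grid[:, :2], model_grid[:, 2],
                          (magnitude, feh), method="linear"))

print(age_from_models(3.1, -1.5))  # a star between the grid points
```

In this picture, the quoted uncertainties map directly onto the inputs: the more precise Gaia's brightness measurement, the more tightly the star is pinned to one location in the grid, and the smaller the spread of model ages consistent with it.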

Gaia’s EDR3 data release changes this. “With Gaia’s brightness data, we are able to determine the age of a subgiant star to a few percent,” says Maosheng. Armed with precise ages for a quarter of a million subgiant stars spread throughout the galaxy, Maosheng and Hans-Walter began the analysis.

Read the full article at: phys.org

Everything you can see, and everything you could possibly see, right now — assuming your eyes could detect all types of radiation around you — is the observable universe. In light, the farthest we can see comes from the cosmic microwave background, emitted 13.8 billion years ago when the universe was opaque like thick fog. Some neutrinos and gravitational waves that surround us come from even farther out, but humanity does not yet have the technology to detect them.

 

The featured image illustrates the observable universe on an increasingly compact scale, with the Earth and Sun at the center surrounded by our Solar System, nearby stars, nearby galaxies, distant galaxies, filaments of early matter, and the cosmic microwave background. Cosmologists typically assume that our observable universe is just the nearby part of a greater entity known as “the universe” where the same physics applies. However, there are several lines of popular but speculative reasoning that assert that even our universe is part of a greater multiverse where either different physical constants occur, different physical laws apply, higher dimensions operate, or slightly different-by-chance versions of our standard universe exist.

Read the full article at: apod.nasa.gov

Blockchain applications go far beyond cryptocurrency and bitcoin. With its ability to create more transparency and fairness while also saving businesses time and money, the technology is impacting a variety of sectors in ways that range from enforcing contracts to making government work more efficiently.

We’ve rounded up 34 examples of real-world blockchain use cases for this pragmatic yet revolutionary technology. The list is far from exhaustive, but these applications are already changing how we do business.

Read the full article at: builtin.com

The critical question for any metaverse beginner is how the metaverse can change education. This big question raises many doubts, particularly about which specific parts of the metaverse might play a role here.

A broader classification of the metaverse brings you face to face with four important concepts: augmented reality, lifelogging, virtual reality, and mirror worlds. Each of these concepts offers a distinct picture of how metaverse technology can drive use cases in the education sector. Let us look briefly at each of these four aspects of the metaverse.

 


Read the full article at: 101blockchains.com

Unity spun that idea into an arm of its business and is now leveraging its game engine technology to help clients make “digital twins” of real-life objects, environments, and, recently, people. “The real world is so freaking limited,” said Danny Lange, Unity’s senior vice president of artificial intelligence, in Unity’s San Francisco headquarters last October. Speaking with WIRED in 2020, he had told me, “In a synthetic world, you can basically re-create a world that is better than the real world for training systems. And I can create many more scenarios with that data in Unity.”

 

Learn more / En savoir plus / Mehr erfahren:

 

https://www.scoop.it/t/21st-century-innovative-technologies-and-developments/?&tag=VR

 

http://www.scoop.it/t/21st-century-learning-and-teaching/?&tag=Virtual+Reality

 

Read the full article at: www.wired.com

Synthetically generated faces are not just highly photorealistic, they are nearly indistinguishable from real faces… Perhaps most interestingly, we find that synthetically generated faces are deemed more trustworthy than real faces. This may be because synthesized faces tend to look more like average faces, which themselves are deemed more trustworthy.

 

Regardless of the underlying reason, synthetically generated faces have emerged on the other side of the uncanny valley. This should be considered a success for the fields of computer graphics and vision. At the same time, easy access to such high-quality fake imagery has led and will continue to lead to various problems, including more convincing online fake profiles and—as synthetic audio and video generation continues to improve—problems of nonconsensual intimate imagery, fraud, and disinformation campaigns, with serious implications for individuals, societies, and democracies.

Read the full article at: geneticliteracyproject.org

A nuclear war would quickly bring cataclysmic climate change. A recent scientific paper, in sync with countless studies, concludes that in the aftermath of nuclear weapons blasts in cities, “smoke would effectively block out sunlight, causing below-freezing temperatures to engulf the world.” Researchers estimate such conditions would last for 10 years. The Federation of American Scientists predicts that “a nuclear winter would cause most humans and large animals to die from nuclear famine in a mass extinction event similar to the one that wiped out the dinosaurs.”

 

While there’s a widespread myth that the danger of nuclear war has diminished, this illusion is not the only reason why the climate movement has failed to include prevention of nuclear winter on its to-do list. Notably, the movement’s organizations rarely even mention nuclear winter. Another factor is the view that unlike climate change, which is already happening and could be exacerbated or mitigated by policies in the years ahead, nuclear war will either happen or it won’t. That might seem like matter-of-fact realism, but it’s more like thinly disguised passivity wrapped up in fatalism.  In the concluding chapter of his 2017 book “The Doomsday Machine,” Daniel Ellsberg warns: “The threat of full nuclear winter is posed by the possibility of all-out war between the United States and Russia. … The danger that either a false alarm or a terrorist attack on Washington or Moscow would lead to a preemptive attack derives almost entirely from the existence on both sides of land-based missile forces, each vulnerable to attack by the other: each, therefore, kept on a high state of alert, ready to launch within minutes of warning.” 

 

Ellsberg adds that “the easiest and fastest way to reduce that risk — and indeed, the overall danger of nuclear war — is to dismantle entirely” the Minuteman III missile force of ICBMs comprising the land-based portion of U.S. nuclear weaponry.

 

A recent issue of The Nation includes an article that Ellsberg and I wrote to emphasize the importance of shutting down all ICBMs.

Here are some of its key points:

  • “Four hundred ICBMs now dot the rural landscapes of Colorado, Montana, Nebraska, North Dakota and Wyoming. Loaded in silos, those missiles are uniquely — and dangerously — on hair-trigger alert. Unlike the nuclear weapons on submarines or bombers, the land-based missiles are vulnerable to attack and could present the commander in chief with a sudden use-them-or-lose-them choice.”
  • Former Defense Secretary William Perry wrote five years ago: “First and foremost, the United States can safely phase out its land-based intercontinental ballistic missile (ICBM) force, a key facet of Cold War nuclear policy. Retiring the ICBMs would save considerable costs, but it isn’t only budgets that would benefit. These missiles are some of the most dangerous weapons in the world. They could even trigger an accidental nuclear war.”
  • “Contrary to uninformed assumptions, discarding all ICBMs could be accomplished unilaterally by the United States with no downsides. Even if Russia chose not to follow suit, dismantling the potentially cataclysmic land-based missiles would make the world safer for everyone on the planet.”
  • Frank von Hippel, a former chairman of the Federation of American Scientists who is co-founder of Princeton’s Program on Science and Global Security, wrote this year: “Strategic Command could get rid of launch on warning and the ICBMs at the same time. Eliminating launch on warning would significantly reduce the probability of blundering into a civilization-ending nuclear war by mistake. To err is human. To start a nuclear war would be unforgivable.”
  • “Better sooner than later, members of Congress will need to face up to the horrendous realities about intercontinental ballistic missiles. They won’t do that unless peace, arms-control and disarmament groups go far beyond the current limits of congressional discourse — and start emphasizing, on Capitol Hill and at the grassroots, the crucial truth about ICBMs and the imperative of eliminating them all.”

 

At the same time that the atmospheric levels of greenhouse gases have continued to increase, so have the dangers of nuclear war. No imperatives are more crucial than challenging the fossil fuel industry and the nuclear weapons industry as the terrible threats to the climate and humanity that they are.

Read the full article at: www.salon.com

For the first time, MIT neuroscientists have identified a population of neurons in the human brain that lights up when we hear singing, but not other types of music. These neurons, found in the auditory cortex, appear to respond to the specific combination of voice and music, but not to either regular speech or instrumental music. Exactly what they are doing is unknown and will require more work to uncover, the researchers say.

 

“The work provides evidence for relatively fine-grained segregation of function within the auditory cortex, in a way that aligns with an intuitive distinction within music,” says Sam Norman-Haignere, a former MIT postdoc who is now an assistant professor of neuroscience at the University of Rochester Medical Center.

 

The work builds on a 2015 study in which the same research team used functional magnetic resonance imaging (fMRI) to identify a population of neurons in the brain’s auditory cortex that responds specifically to music. In the new work, the researchers used recordings of electrical activity taken at the surface of the brain, which gave them much more precise information than fMRI.

 

“There’s one population of neurons that responds to singing, and then very nearby is another population of neurons that responds broadly to lots of music. At the scale of fMRI, they’re so close that you can’t disentangle them, but with intracranial recordings, we get additional resolution, and that’s what we believe allowed us to pick them apart,” says Norman-Haignere.

 

Norman-Haignere is the lead author of the study, which appears today in the journal Current Biology. Josh McDermott, an associate professor of brain and cognitive sciences, and Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience, both members of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds and Machines (CBMM), are the senior authors of the study.

Read the full article at: news.mit.edu