Microsoft’s new AI can simulate anyone’s voice with 3 seconds of audio | VALL-E

On Thursday, Microsoft researchers announced a new text-to-speech AI model called VALL-E that can closely simulate a person’s voice when given a three-second audio sample. Once it learns a specific voice, VALL-E can synthesize audio of that person saying anything—and do it in a way that attempts to preserve the speaker’s emotional tone.

Its creators speculate that VALL-E could be used for high-quality text-to-speech applications, speech editing where a recording of a person could be edited and changed from a text transcript (making them say something they originally didn’t), and audio content creation when combined with other generative AI models like GPT-3.

 


Read the full article at: arstechnica.com

Wolfram|Alpha as the Way to Bring Computational Knowledge Superpowers to ChatGPT — Stephen Wolfram

Combining Wolfram|Alpha and GPT-3 could create a powerful tool for answering complex questions and generating human-like text: one that provides in-depth explanations and discussions on a wide range of topics and engages in natural-sounding conversation.

 

Accessing Wolfram|Alpha’s computational knowledge with ChatGPT–an ideal combination of precise computation with human-like expression of ideas. Stephen Wolfram explains how.

 

It’s always amazing when things suddenly “just work”. It happened to us with Wolfram|Alpha back in 2009. It happened with our Physics Project in 2020. And it’s happening now with OpenAI’s ChatGPT. Stephen Wolfram has been tracking neural net technology for about 43 years and he finds the performance of ChatGPT thoroughly remarkable. Suddenly, there is a system that can successfully generate text about almost anything that is very comparable to what humans might write. It’s impressive and useful, and its success is probably going to tell us some very fundamental things about the nature of human thinking.

 

But while ChatGPT is a remarkable achievement in automating the doing of major human-like things, not everything that’s useful to do is quite so “human like”. Some of it is instead more formal and structured. And indeed one of the great achievements of our civilization over the past several centuries has been to build up the paradigms of mathematics, the exact sciences—and, most importantly, now computation—and to create a tower of capabilities quite different from what pure human-like thinking can achieve.

 

Read the full article at: writings.stephenwolfram.com

Education is about to radically change: AI for the masses – Everyone is Getting Very Smart 

Over recent weeks, millions of people have tried the new AI chatbot released by OpenAI, built on an upgraded GPT-3 (Generative Pre-trained Transformer). The tool uses a neural network, trained on data sourced from the internet, to generate responses. OpenAI, backed by Microsoft, also built and released the currently free DALL-E, an AI image-generation tool.

 

By providing an easy user interface, ChatGPT likely has many educators wondering about the future of learning. The platform will improve rapidly when next-generation GPT-4 models emerge, most likely in early 2023; in other words, it is only going to get much, much better.

 

AI already impacts education, along with every other sector, and will continue to do so. Innovative education leaders have an opportunity (along with parallel emerging innovations in Web3) to build the foundation for the most personalized learning system we have ever seen. Using these tools, educators can design an equitable and efficient model for every learner to find purpose and agency in their lives, and the opportunity to help solve some of the world’s most pressing challenges.

Read the full article at: www.gettingsmart.com

ChatGPT’s new features: Prompt Palettes

Created by AI

 

https://twitter.com/debarghya_das/status/1610470866713972737

Smart summary:
The thread discusses the upcoming “Prompt Palettes” feature for ChatGPT, which will provide users with pre-written text prompts to help with tasks like formatting raw text, summarizing text, and serving as a programming assistant. The feature will use OpenAI’s GPT-3 Codex models, and is similar to a “brushes” feature being developed by GitHub Copilot Labs.
---

ChatGPT will soon drop a new feature – Prompt Palettes!

They’re pre-written text prompts to perform tasks like
– format raw text to markdown
– summarize text
– be a programming assistant
– add text from a link as context

How it works: the EXACT prompts are pre-written bits of text that augment a user’s input for a specific task.

It’s like a magic button for text, driven entirely by pre-written instructions. Here are the prompts for “format” and “summarize”. You should be able to add your own too.
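The mechanism described here is essentially prompt prefixing: a fixed instruction string is prepended to the user’s raw text before the combined prompt is sent to the model. A minimal sketch of that idea (the palette texts and function names below are illustrative, not ChatGPT’s actual prompts):

```python
# Minimal sketch of a "prompt palette": each entry is a pre-written
# instruction that gets prepended to the user's raw text.
# The palette texts are invented for illustration, not the real prompts.
PALETTES = {
    "format": "Format the following raw text as clean Markdown:\n\n",
    "summarize": "Summarize the following text in a few sentences:\n\n",
}

def apply_palette(name: str, user_text: str) -> str:
    """Build the full prompt sent to the model for a given palette."""
    if name not in PALETTES:
        raise ValueError(f"unknown palette: {name!r}")
    return PALETTES[name] + user_text

# Usage: the user picks a palette button, types raw text, and the
# concatenated string is what actually reaches the model.
prompt = apply_palette("summarize", "Long article text ...")
```

Adding a custom palette is then just adding another entry to the dictionary, which matches the thread’s claim that users should be able to define their own.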

— ChatGPT Coding Assistant —

The Coding Assistant is slightly different from Prompt Palettes: it seems to be a pre-written addendum to a query that serves as a one-shot way to focus on a specific vertical task.

This might be forcing the use of GPT-3 Codex explicitly: https://t.co/GddMvqG3pU

Prompt palettes act on a specific message, and ChatGPT is adding a nifty “Add text from link” feature which will allow you to, say, summarize websites easily.

Prompt palettes aren’t new! GitHub Copilot Labs has been working on a similar magic “brushes” feature that integrates directly into VS Code; it uses OpenAI’s GPT-3 Codex models too.

https://t.co/8gOp9L17Bx

Prompt Palettes will bring these powerful new LLM features to a mainstream audience of over a million users! 2023 has just begun for AI.

https://t.co/04GwCmOQr8

Thanks to @eeeziii for the idea of reverse-engineering ChatGPT (he did this too)!

ChatGPT is a minified React app with chunked JS that uses Server-Sent Events on /conversations to stream the bulk of the output. It uses text-davinci-002-render with a 4097-token maximum.
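Server-Sent Events is a plain-text HTTP streaming format: the server emits `data: <chunk>` lines, and a blank line terminates each event. A minimal parser for that wire format is sketched below; the example payloads are invented for illustration (the real /conversations stream carries JSON message deltas and ends with a `data: [DONE]` sentinel):

```python
def parse_sse(stream_lines):
    """Parse Server-Sent Events lines, yielding each event's data payload.

    Per the SSE format, `data:` lines accumulate into the current event,
    a blank line ends the event, and multi-line data joins with newlines.
    """
    buffer = []
    for line in stream_lines:
        if line.startswith("data:"):
            buffer.append(line[5:].lstrip())
        elif line == "" and buffer:
            yield "\n".join(buffer)
            buffer = []
    if buffer:  # stream ended without a trailing blank line
        yield "\n".join(buffer)

# Invented example stream: two text chunks, then a [DONE] sentinel.
raw = ["data: Hel", "", "data: lo", "", "data: [DONE]", ""]
chunks = [e for e in parse_sse(raw) if e != "[DONE]"]
print("".join(chunks))  # prints "Hello"
```

Streaming this way is what lets the ChatGPT UI render the reply token by token instead of waiting for the full completion.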

Read the full article at: mem.ai

Finding Language in the Brain

Psycholinguist Giosuè Baggio sheds light on the thrilling, evolving field of neurolinguistics, where neuroscience and linguistics meet.

 

What exactly is language? At first thought, it’s a continuous flow of sounds we hear, sounds we make, scribbles on paper or on a screen, movements of our hands, and expressions on our faces. But if we pause for a moment, we find that behind this rich experiential display is something different: the smaller and larger building blocks of a Lego-like game of construction, with parts of words, words, phrases, sentences, and larger structures still.

 

We can choose the pieces and put them together with some freedom, but not anything goes. There are rules, constraints. And no half measures. Either a sound is used in a word, or it’s not; either a word is used in a sentence, or it’s not. But unlike Lego, language is abstract: Eventually, one runs out of Lego bricks, whereas there could be no shortage of the sound b, and no cap on reusing the word “beautiful” in as many utterances as there are beautiful things to talk about.

Read the full article at: thereader.mitpress.mit.edu