
BrainGPT: Mind-Reading Technology Turns Thoughts Into Text


Mind-reading has long been the stuff of science fiction stories. When it comes to technology, however, the line between fiction and reality is becoming blurred.

Scientists at the University of Technology Sydney have begun exploring that line. 

Recently, they’ve taken some big leaps forward with mind-reading technology… And it’s mind-blowing!

They hope to create a future where they will be able to give a voice to those who have lost theirs to medical and neurological issues. 

The new artificial intelligence, which pairs a decoder called DeWave with a language model dubbed BrainGPT, has successfully translated thoughts into text with roughly 40% accuracy.

The goal is to reach 90%.

Does Mind-Reading Technology Exist?

Despite all the recent innovations in artificial intelligence, there is no technology, so far, that can open up the mind and read its contents like a diary. There is, however, technology that can study the human brain and glean all sorts of information. This is what is meant by the term “mind-reading technology”.

Advancements in technology can tell researchers how illness or injury impacts the brain. Scientists can learn how the brain responds to certain sights or sounds. Using this information, they can use Artificial Intelligence to help them decode patterns and “read minds.”

And now with BrainGPT, these patterns can be turned into text.

Mind-Reading Technology Can Create a New Era of Communication

For the first time in history, doctors are beginning to reverse the effects of spinal injuries. Although healing is a long process, scientists have made amazing progress in treating paralysis using Artificial Intelligence (AI).

Those with paralysis or stroke often communicate through eye movements or twitches. Imagine a world where mind-reading technology can allow us to begin understanding and even communicating with people in this condition.

Mind-reading technology could also be used to understand Organoid Intelligence (OI). This is an emerging field of study where scientists are creating a bio-computer using human brain cells. By gaining a better understanding of how the human brain functions and responds, scientists can begin to answer important questions as OI progresses.

  • Can brain organoids feel sensations?
  • Do they show signs of communication?
  • How do they respond to certain stimuli?
  • At what point, if any, do they begin functioning like a real brain?

What Is Mind-Reading Technology?

Any technology that investigates what’s happening in the human brain could be called mind-reading technology. 

Take algorithms, for example. Algorithms are used in browsers to enhance the user's experience. They determine what a user needs or would like to see based on previous activities and interests. Another example would be an email program that can finish a phrase or sentence before the user does.

These kinds of algorithms are based on observations of past behavior patterns. Those observations are data, and that data can be used to calculate the likelihood of the next action or choice.
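
As a rough illustration of that idea, here is a minimal Python sketch (not how any real email client works) of a next-word predictor: it counts which word has followed which in past text, then suggests the most frequent follower.

from collections import Counter, defaultdict

# Toy next-word predictor: learn which word tends to follow which in past
# text, then suggest the most frequent follower. Real autocomplete systems
# are far more sophisticated, but the principle is the same.
history = "thank you for your help thank you for your time thanks for your patience"

follow_counts = defaultdict(Counter)
words = history.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def suggest(word):
    followers = follow_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(suggest("for"))  # 'your': the likeliest continuation seen so far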

DeWave is the AI decoder utilized by BrainGPT, a language model that also draws on vast amounts of neuroscience data to help researchers. This technology is reaching a point where it's using data to read minds in a very real way.

How Does Mind-Reading Technology Work?

For mind-reading technology to work, it’s first important to understand how the human brain processes language.

Words don’t exist in the human brain the way they exist in writing or speech. The brain manifests words in the form of brain waves, or electrical impulses, that fire when a word is spoken or read. These impulses are unique and occur in a wide variety of places in the brain.

In tests of the DeWave software, subjects wore a snug-fitting cap that recorded their brain waves as they read silently from assigned material. The DeWave technology then studied these brainwaves, learning to associate each wave pattern with a specific word. The result is a kind of dictionary of brainwaves that allows DeWave to interpret what a subject is thinking.
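
To make the "dictionary of brainwaves" idea concrete, here is a toy sketch in Python. This is not DeWave's actual method (DeWave pairs a trained neural decoder with a language model); it simply stores one averaged "fingerprint" per word from made-up signals and labels new segments by the closest match.

import numpy as np

# Toy "dictionary of brainwaves" (illustration only, not DeWave's real method):
# average the segments recorded while a known word was read, keep one
# fingerprint per word, then label new segments by the nearest fingerprint.
rng = np.random.default_rng(0)
SEGMENT_LEN = 64  # samples per recorded segment (made-up size)

true_patterns = {"hello": rng.normal(size=SEGMENT_LEN),
                 "world": rng.normal(size=SEGMENT_LEN)}
recordings = {word: [pattern + rng.normal(scale=0.5, size=SEGMENT_LEN)
                     for _ in range(20)]
              for word, pattern in true_patterns.items()}

dictionary = {word: np.mean(segs, axis=0) for word, segs in recordings.items()}

def decode(segment):
    # Choose the word whose stored fingerprint is closest to this segment.
    return min(dictionary, key=lambda w: np.linalg.norm(segment - dictionary[w]))

new_segment = true_patterns["world"] + rng.normal(scale=0.5, size=SEGMENT_LEN)
print(decode(new_segment))  # expected: 'world'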

At 40% accuracy, it’s not perfect. However, the goal of 90% sounds pretty incredible. Scientists are learning more about why inaccuracies occur with DeWave, and how it can be advanced to make BrainGPT more accurate.


Why Isn’t BrainGPT Perfect?

Technologies like BrainGPT and DeWave still produce inaccuracies, and those inaccuracies have several causes.

  • Reading vs. Thinking: Research teams collect data while subjects read printed words. As people read, spaces between words and certain punctuation marks signal them to take a pause. But when people think freely, such pauses don’t exist. This can make it challenging for BrainGPT to recognize words and sequences.
  • Clustering Words: The brain tends to cluster words with similar meanings, which gives them very similar brainwave patterns. This forces BrainGPT to make a choice. Sometimes it will replace a common word with something similar, but different. For example, “The author” might be interpreted as “The man”.
  • Brain Differences: Scientists are still studying how the brain processes language in many different situations. For example, people who speak English as a second language may have a very different set of neural responses than native speakers. Researchers of mind-reading technology have yet to understand the impact of various accents and other speech differences.

For now, scientists are excited that the technology has become accurate enough to interpret much of what a person is thinking. Word-for-word, literal thought interpretation is still a thing of the future.

However, as scientists work toward this goal, pressing questions have arisen about the safety and ethics of this new technology.

How Should We Handle Mind-Reading Technology as It Evolves?

The thought of allowing technology to peer into our most private thoughts is frightening for a lot of people. The technology carries both amazing potential and real risk.

The technology could give a voice to people who can’t speak. There has also been speculation that such technology could be used to identify people with serious mental health problems. This could create the opportunity to help those who need it, at a time when it matters.

Mind-reading technology could also be used to develop a lie detector that is impossible to cheat, and even to prevent crimes before they happen.

However, this technology could have some serious downsides. It could be used to steal highly secure information, including passwords to computers containing financial or medical records. Results can also be misinterpreted, which can lead to false judgments.

Two approaches have been suggested by researchers for dealing with the ethics of mind-reading technology.

  • Embedded ethics: This approach would involve programming ethics into the software and hardware involved in these mind-reading devices. This ensures the technology is incapable of crossing boundaries laid out for it.
  • Adversarial ethics: This would involve the development of laws and regulations. These limits would be enforced by authorities governing the way this technology is used. 

Ethics and rights will need to be evaluated as neuroscience and Artificial Intelligence evolve. Organizations like The Neurorights Foundation are already planning for the future. They’ve identified five different areas of specific focus:

  • Mental privacy
  • Personal identity
  • Free will
  • Fair access to mental augmentation
  • Protection from bias

IC Inspiration

Our brains are incredibly complex and amazing libraries of information. We collect vast amounts of data throughout our lifetimes. Our brains have developed amazing ways to store and protect this information.

Language has long been associated with only two sections of our brain, both in the left hemisphere. Scientists in Berkeley, California have recently blown that theory clear out of the water (and into space).

They put people inside an MRI machine for an extended period. The people then listened to recordings of stories, while the researchers studied how their brains responded to each of the words. 

The result was an incredible word map of the human brain. The researchers found that each word caused a response in a different part of the brain. No part of the brain is excluded. Words are associated with every part of this incredible organ.

More than that, the brain actually organizes these words into categories. Words associated with math and measurement are all grouped in one area, while words related to food and drink may cluster in another.

This new and amazing understanding of the human brain combined with advances in BrainGPT could bring with it some awesome potential. It can help us to understand each other on a new and deeper level than ever before.

Joy L. Magnusson is an experienced freelance writer with a special passion for nature and the environment, topics she writes about widely. Her work has been featured in Our Canada Magazine, Zooanthology, Written Tales Chapbook, and more.


Commercial Hypersonic Travel Can Have You Flying 13,000 Miles Per Hour!


If engineers start up a hypersonic engine at the University of Central Florida (UCF) and you’re not around to hear it, does it make a sound?

Hypersonic travel means flying at least five times faster than the speed of sound. A team of aerospace engineers at UCF has created the first stable hypersonic engine, and it could have you travelling across the world at 13,000 miles per hour!

Compared to the 575 mph a typical jet flies, commercial hypersonic travel is a first-class trade-off anybody would be willing to make.

In fact, a flight from Tampa, FL to California takes nearly 5 hours on a typical commercial jet; with a commercial hypersonic aircraft, it would take only about 10 minutes.
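
The numbers roughly check out. Here is a back-of-the-envelope sketch in Python, assuming a sea-level speed of sound of about 767 mph and a Tampa-to-California air distance of roughly 2,200 miles (both approximations, not figures from the UCF team):

# Back-of-the-envelope check on the article's numbers.
SPEED_OF_SOUND_MPH = 767   # approximate, at sea level; it varies with altitude

mach_17_mph = 17 * SPEED_OF_SOUND_MPH      # the HyperReact's theoretical top speed
print(f"Mach 17 is about {mach_17_mph:,.0f} mph")   # ~13,000 mph

distance_miles = 2200      # rough Tampa-to-California distance (assumption)
print(f"Typical jet at 575 mph: {distance_miles / 575:.1f} hours in the air")
print(f"At 13,000 mph: {distance_miles / 13000 * 60:.0f} minutes")   # ~10 minutes

(The jet figure is cruise time only; scheduled flights run longer once climb, descent, and routing are included.)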

So here’s the question: When can we expect commercial hypersonic air flights?

When we stop combusting engines and start detonating them! With a little background information, you’ll be shocked to know why.

Challenges and Limitations of Commercial Hypersonic Travel

The challenge with commercial hypersonic air travel is maintaining combustion to keep an aircraft moving in a stable way. The difficulty comes from both the combustion and the aerodynamics that happen at such high speeds.

What Engineering Challenges Arise in Controlling and Stabilizing Hypersonic Aircraft at Such High Speeds?

Combustion is the process of burning fuel. It happens when fuel mixes with air, creating a reaction that releases energy in the form of heat. This mixture of air and fuel creates combustion, and combustion is what generates the thrust needed for the movement of most vehicles.

But hypersonic vehicles are quite different. A combustion engine is not efficient enough for vehicles to achieve stable hypersonic speeds. For a hypersonic aircraft to fly commercially, a detonation engine is needed.

Detonation can propel vehicles to much higher speeds than combustion, so creating a detonation engine is essential for commercial hypersonic air travel. Detonation engines were long thought to be impossible, not because you couldn’t create them, but because stabilizing them is so difficult.

On one hand, detonation can greatly speed up a vehicle or aircraft; on the other hand, both the power and the speed it creates make stabilizing the engine even harder.

[Image: Combustion vs. Detonation]

How Do Aerodynamic Forces Impact the Design and Operation of Hypersonic Vehicles?

Aerodynamics relates to the motion of air around an object—in this case, an aircraft. As you can imagine, friction between an aircraft and the air it travels through generates a tremendous amount of heat. The faster the vehicle, the more heat created.

Commercial hypersonic vehicles must be able to manage the heat created at hypersonic speeds to avoid being damaged.

Hypersonic aircraft do exist, but only in experimental and military applications. NASA’s Hyper-X program developed some of these vehicles, one of which, the X-43A, reached hypersonic speeds of Mach 6.8 (6.8 times the speed of sound).

  • Mach 1.0 (Sonic): exactly the speed of sound.
  • Mach 1.2-5 (Supersonic): faster than the speed of sound, characterized by shock waves.
  • Mach 5.0+ (Hypersonic): more than five times the speed of sound, with extreme aerodynamic heating.

Description of Mach levels

But vehicles for commercial hypersonic air travel are still a work in progress.

Engineers say that we will have these vehicles by 2050, but it may be even sooner than that. Here’s why.

Future Prospects and Developments in Hypersonic Travel

The world’s first stable hypersonic engine was created back in 2020 by a team of aerospace engineers at UCF, and they have continued to refine the technology since. This work is revolutionizing hypersonic technology in a way that had been thought impossible just a few years ago.

To create a stable engine for commercial hypersonic air travel, an engine must first be created that can handle detonation. Not only that, this engine must keep producing detonations while controlling them.

This is because, in order to reach hypersonic speeds and then stay at that level, there need to be repeated detonations thrusting the vehicle forward.

The development at UCF did just that. They created a Rotating Detonation Engine (RDE) called the HyperReact.

What Technological Advancements are Driving the Development of Commercial Hypersonic Travel?

When a detonation happens, a large amount of energy creates a high-pressure wave known as a shockwave. This compression creates higher pressures and temperatures, which ignite fuel injected into the air stream. This mixture of air and fuel combusts, and combustion is what generates the thrust needed for a vehicle’s movement.

Rotating Detonation Engines (RDEs) are quite different. The shockwave generated by the detonation is carried to the “test” section of the HyperReact, where the wave repeatedly triggers detonations faster than the speed of sound (picture Wile E. Coyote lighting up his rocket to catch up to the Road Runner).

Theoretically, this engine could allow for hypersonic air travel at speeds of up to Mach 17 (17 times the speed of sound).

[Image: Schematic diagram of the experimental HyperReact prototype, University of Central Florida]

Hypersonic technology, with the development of the Rotating Detonation Engine, will pave the way for commercial hypersonic air travel. But even before that happens, RDEs will be used for space launches and eventually space exploration.

NASA has already begun testing 3D-printed Rotating Detonation Rocket Engines (RDREs) as of 2024.

How Soon Can We Expect Commercial Hypersonic Travel to Become a Reality?

Since we now have the world’s first stable hypersonic engine, the world’s first commercial hypersonic flight won’t be far off. Kareem Ahmed, the UCF professor who leads the experimental HyperReact team, says it’s very likely we will have commercial hypersonic travel by 2050.

It’s important to note that hypersonic air flight has happened before, but only in experimental form. NASA’s X-43A aircraft flew at nearly 8,000 miles per hour, reaching Mach 10 levels. The difference is that the X-43A flew on scramjets and not Rotating Detonation Engines (RDEs).

Scramjets are combustion engines that are also capable of hypersonic speeds, but they are less efficient than Rotating Detonation Engines (RDEs) because they rely on combustion, not continuous detonation.

This makes RDEs the better choice for commercial hypersonic travel, and it explains why NASA has been testing them for space launches.

One thing is certain:

We can shoot for the stars but that shot needs to be made here on Earth… If we can land on the moon, we’ll probably have commercial hypersonic travel soon.


IC INSPIRATION

The first successful aviation flight took place 27 years after the first patented aviation engine was created, and the first successful spaceflight happened 35 years after the first successful rocket launch.

If the world’s first stable hypersonic engine was created in 2020, how long after until we have the world’s first Mach 5+ commercial flight?

  • 1876-1903: Nicolaus Otto developed the four-stroke combustion engine in 1876 that became the basis for the Wright brothers’ first flight ever in 1903.
  • 1926-1961: Robert H. Goddard’s first successful rocket launch in 1926 paved the way for the first human spaceflight by Yuri Gagarin in 1961.
  • 2020-2050: The first stable RDE was created in 2020, and history is in the making!

Shout out to Professor Kareem Ahmed and his team at UCF. They’ve set the precedent for history in the making.

Imagine travelling overseas without the long flights and difficult hauls, or RDREs so capable that they reduce the costs and increase the efficiency of space travel.

When time seems to be moving fast, hypersonic speed is something I think everyone can get behind!



3D Printed Organs Save Woman’s Life and Accidentally Pave Way for Biology-Powered Artificial Intelligence


A Great Advancement for 3D Printed Organs

3D printing in hospitals is nothing new, but for the first time in history, a woman received a 3D printed windpipe that became fully functional without the need for immunosuppressants.

Immunosuppressants are used during organ transplants to keep the body from attacking the organ that it sees as foreign. This means that the organ the woman received was organic and personalized for her, as if she had had it her entire life.

This mind-blowing news shows that we are now closer than ever to being able to create full-scale, functional, and complicated 3D printed organs like a heart or lung.

But what about creating a brain?

3D Printing and Organoid Intelligence

Organoid Intelligence, or OI, is an emerging field of study focused on creating bio-computers by merging AI with real brain cells called organoids. Organoids are miniature, simplified versions of organs grown in a lab dish. They mimic some of the functions of fully grown organs, like brains. The idea behind OI is that by increasing the number of cells organoids contain, they may begin to function like fully grown brains and can then be used alongside computers to enhance Artificial Intelligence.

It turns out that the world’s first 3D printed windpipe was so successful that we are now closer than ever to creating the world’s first organoid-intelligence bio-computer.

Here’s why.

The World’s First 3D Printed Windpipe

Transplant patients usually have to take a long course of immunosuppressants to help the body accept the organ. The body sees the organ as foreign, and so the immune system begins to attack the new organ, which can lead to more complicated health problems.

The woman in her 50s who received the 3D printed windpipe did so without any immunosuppressants. Within six months of the operation, the windpipe healed and began to form blood vessels and, of course, more cells.

The current goal of scientists in the field of Organoid Intelligence is to increase organoids from 100,000 cells to 10 million, and this raises the question:

Can 3D printing help build bio-computers by creating better organoids?

Can 3D Printing Help Build Bio-Computers?

The world’s first 3D printed windpipe shows that advances in 3D printing can create better-functioning organs, which implies that we can also create more intricate organoids to help the field of Organoid Intelligence and eventually create bio-computers.

It’s important to understand the distinction between 3D printing an organ and printing something like a tool or musical instrument.

The difference between printing an organ and printing a non-biological structure depends on the ink being used in the 3D printer.

3D printing non-organic structures requires ink made from plastic, plastic alternatives like PLA, metal, or ceramics. 3D printed organs, on the other hand, are made from “bio-inks”: mixtures of living cells and biocompatible substances.

In the case of the 3D printed windpipe, the ink used was partly formed from the stem and cartilage cells collected from the woman’s own nose and ear. It was because of this bio-ink that the woman’s body did not reject the organ.

The Problem With 3D Printed Organs

Organs created with bioprinting need to function like real organs for the body to safely use them, and this does not happen right away.

The 3D printed organs need to go beyond being just printed structures and become living. They need to form the tissues and cells that create biological functionality, and forming these cells takes time.

The problem with 3D bioprinting is that the ink used for the printer needs to be effective at doing this, and if it is not, the organ may not stay functional.

The ink used for the 3D-printed windpipe was made from part bio-ink and part polycaprolactone (PCL), a synthetic polyester material.

PCL is used in the 3D ink to maintain the structure of the windpipe, while the bio-ink helps the 3D printed organ become fully biological over time so that the body can use it.

The PCL maintains the structure while the bio-ink does its thing.

The problem with PCL is that it is biodegradable and won’t last forever. In fact, doctors don’t expect the 3D-printed windpipe to last more than five years.

The Solution is Better Bio-ink

The 3D printed windpipe was not made with PCL alone; it also contained bio-ink made from living cells. The hope is that the living cells in the 3D printed organ, which came from the bio-ink, will assist the patient’s body in creating a fully functional windpipe to take over the PCL’s function.

If the organ begins to form cells and tissue by itself, then the function of PCL will be replaced by the biological function of the organ that is growing.

The organ becomes real!

Bio-ink helps the 3D printed organ mimic its natural environment of cells and eventually become a real organ.

3D Printing Organs Will Save Lives

Every year, thousands of people need a lifesaving organ transplant. These transplants cost hundreds of thousands of dollars, and many people who need them don’t make it past the waiting list.

3D Printing organs could give people the incredible opportunity to receive the help they need when they need it, saving thousands of lives annually, and millions of lives in the long run.

As advances are made in 3D bioprinting, they will also be made in Organoid and Artificial Intelligence, showing once again that progress in one area shines its way into another.

[Image: a brain being created by 3D printers]

IC Inspiration:

If we can create better forms of bio-ink and produce fully functional organs using 3D printing, we will fundamentally change the entire health care system.

17 people die every single day waiting for an organ transplant, many of whom can’t afford the transplant in the first place.

The biggest hope for everyone affected is that organs can be produced when they are needed, ending the transplant shortage and saving millions of lives in the future.

We have seen from this story that personalized organs made from a patient’s own cells can stop the body’s rejection of organs. This shows us that there will come a time when there is no need for immunosuppressant therapy.

Even more amazing is that doctors use 3D-printed models to practice a surgery, sharpening their skills and finding better surgical pathways before the real operation.

Think about it… If you can’t use a real organ to practice on, then 3D organs are the next best thing.

The production of organs, the irrelevance of immunosuppressants, and more efficient surgery will eventually drive down the price of transplants. 3D printing organs in the future will not only save lives; it will also increase the quality of those lives afterwards.

That is the sort of world we can create. It’s amazing to think of all the good that is being done right here, right now.



Sora AI is Every Content Creator’s Dream. It’s Almost Here!


Sora is the Japanese word for sky, our blue expanse that is often associated with limitless dreams and possibilities.

You may have heard that OpenAI is also releasing an AI video generator called Sora AI. With its fantastical visuals and lifelike video, it is without a doubt among the top 5 AI technologies of 2024.

OpenAI recently launched Sora’s first short video, “Air Head”, and if it proves anything, it’s that Sora is every content creator’s dream turned reality.

But if you’re not convinced, perhaps a little game called “Can you spot the AI video?” might help.

How Can Sora AI Help Content Creators?

Video producers, filmmakers, animators, visual artists, and game developers all have one thing in common: they are always looking for the next big thing in creative expression. Sora AI is a tool that can greatly enhance content creators’ ability to fuel their imagination and connect with their audiences.

A misconception is that AI is going to replace human artists, videographers, and animators. But if Sora’s first short film has shown anything, it’s that a team was still needed to create the story, narrate the scenes, and edit the videos into the final production.

Sora won’t replace artists; it will equip them with tools to express their artistry in different ways.

Sora’s First Short Film

Shy Kids, a Toronto-based multimedia company, is among the few granted early access to the AI video generator for the sake of testing and refining it before launch. The video the artists generated using Sora AI is called “Air Head”.

Pretty mind-blowing to think that one day, we might be able to create an entire movie with the main character as a balloon. Think of the comedies we can create.

How Does Sora AI Work?

Sora’s first short film “Air Head” shows that Sora AI is the most advanced AI-powered video generation tool yet. Sora creates realistic and detailed 60-second videos on any topic, realistic or fantastical. It only needs a prompt from the user to build on existing information and develop whole new worlds.

What We Know So Far

Sora AI is a new technology with limited access. There’s a strategic reason to limit information about a new technology: to manage the public’s expectations while polishing the final product. Sora is a very powerful tool, and it might be necessary to build strong safeguards and guidelines before releasing it. Here’s what we know so far.

Sora Release Date

OpenAI has not provided any specific release date for public availability, or even a waiting list. However, many sources indicate that it may be released in the second half of 2024. Currently, Sora AI is only being made available to testers called “red teamers” and a select group of designers, like Shy Kids, who have been granted access.

Sora Price

OpenAI has not yet released a price for Sora AI and has made no comment on whether there will be a free version like its other AI models. Based on other AI text-to-video generators, it’s likely that there won’t be a free version, and that Sora will offer a tiered subscription model catering to users who want to dish out videos regularly.

There is also a possibility of a credit-based system, similar to its competitor RunwayML. A credit-based system is where users purchase credits, and each credit is used for a specific task related to generating a video.
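
To make the idea concrete, here is a purely hypothetical Python sketch of how a credit-based system works. OpenAI has announced no actual pricing; the task names and credit costs below are invented for illustration only.

# Hypothetical credit-based billing sketch (no real OpenAI pricing exists yet;
# the task names and credit costs below are invented for illustration).
CREDIT_COSTS = {"generate_video": 10, "extend_clip": 4, "upscale": 2}

class CreditAccount:
    def __init__(self, credits):
        self.credits = credits

    def spend(self, task):
        # Deduct the cost of one task, refusing if the balance is too low.
        cost = CREDIT_COSTS[task]
        if cost > self.credits:
            raise ValueError(f"Not enough credits for {task} (need {cost})")
        self.credits -= cost
        return self.credits

account = CreditAccount(credits=20)
account.spend("generate_video")       # 10 credits left
print(account.spend("extend_clip"))   # prints 6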

Sora’s Video Length

OpenAI has said Sora can generate videos of up to a minute long with visual consistency. Scientific American states that users will be able to increase the length of a video by adding additional clips to a sequence. Sora’s first short film “Air Head” ran for a minute and twenty seconds, which indicates that Sora’s video length can be anywhere between 60 and 90 seconds.

Sora’s Video Generation Time

OpenAI has not revealed how long it will take Sora AI to generate a video; however, Sora will use NVIDIA H100 AI GPUs, which are designed to handle complex artificial intelligence tasks. According to estimates provided by Factorial Funds, these GPUs will allow OpenAI’s Sora to create a one-minute video in approximately twelve minutes.

How is Sora AI Different from Other Video Generators?

Many text-to-video generators have trouble maintaining visual coherency. They will often add visuals that are completely different from one another in each scene, which means the videos need further editing. In some cases, it takes longer to create the video you want using AI than it does to create it yourself.

Sora AI seems to surpass other text-to-video generators in the level of detail and realism it creates. It has a deeper understanding of how the physical world operates.

It Brings Motion to Pictures

Another feature Sora AI offers is still-photo prompts. Sora will be able to take a still photo, such as a portrait, and bring it to life by adding realistic movement and expression to the subject. This means you can generate images using OpenAI’s DALL·E model, and then prompt Sora with text describing what you would like the image to do.

This is like something out of Harry Potter. One of the biggest worries is that Sora AI will be able to depict someone saying or doing something they never did. I don’t think the world’s ready for another Elon Musk deepfake.

Will Sora AI Undermine Our Trust In Videos?

There are over 700 AI-managed fake news sites across the world. OpenAI is already working with red teamers—experts in areas of false content—to help prevent the use of Sora AI in a way that can undermine our trust in videos.

Detection classifiers will play a big role in the future of AI. Among these are tools that can detect AI in writing, and content credentials that record in a file’s metadata whether an image was made using AI.

AI image generators like Adobe Firefly are already using content credentials for their images.

Why Do Sora AI Videos Look So Good?

Sora AI generates its videos using “spacetime patches”. Spacetime patches are small segments of video that allow Sora to analyze complex visual information by capturing both appearance and movement in an effective way. This creates more realistic and dynamic video than other AI video generators, which have fixed-size inputs and outputs.
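
OpenAI hasn’t published the implementation details, but the basic idea of cutting a video into fixed-size spacetime blocks can be sketched in a few lines of Python. The dimensions below are made up for illustration and are not Sora’s real sizes.

import numpy as np

# Sketch of carving a video into "spacetime patches" (illustration only;
# Sora's actual implementation is not public, and these sizes are made up).
frames, height, width, channels = 16, 64, 64, 3
video = np.random.rand(frames, height, width, channels)

t, p = 4, 16  # each patch spans 4 frames and a 16x16 pixel area

patches = (video
           .reshape(frames // t, t, height // p, p, width // p, p, channels)
           .transpose(0, 2, 4, 1, 3, 5, 6)      # group the patch grid first
           .reshape(-1, t, p, p, channels))     # one row per spacetime patch

print(patches.shape)  # (64, 4, 16, 16, 3)

Because each patch carries a little cube of both space and time, a model trained on patches sees appearance and motion together rather than frame by frame.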

One comment said Sora AI videos are like dreams, only clearer… That’s not a bad way to put it. After all, dreams are like movies our brains create, and anyone who gets plenty of REM sleep will understand. But speaking of movies, how will Sora AI affect Hollywood?

Can Sora AI Replace Movies?

As amazing as OpenAI’s text-to-video generator is, it can’t replace actors or carry them through a prolonged storyline, but it can help producers create some fantastic movies. Sora AI can be used to create pre-visuals and concept art, and to help producers scout potential locations.

Pre-visualization: Sora can turn scripts into visual concepts to help both directors and actors plan complex shots.

Concept Art Creation: It can be used to generate unique characters and fantastical landscapes which can then be incorporated into the design of movies.

Location Scouting: Using the prompt description, OpenAI’s Sora can expand location options, and even create locations that are not physically realizable. An example would be a city protruding from a planet floating in space (I sense the next Dune movie here).

[Image: a city protruding from a planet floating in space, generated with Sora AI]

IC INSPIRATION

Content creators have a story to tell, and fantastic content is often the product of a fantastic mind. Sora could transform how we share inspiring stories.

Just imagine for a moment how long it took to conceptualize the locations and characters needed to create a movie like The Lord of the Rings. Think of how many sketches, paintings, and 3D models had to be created before the team got their “aha moment” and finally found the perfect look for the films.

I wonder how much time Sora AI can save film and content creators, and with it, how much money. If it is truly as intuitive as it appears to be, then it could revolutionize the work of filmmakers, video creators, game developers, and even marketers.

A lot of campaigns are too hard to visualize. Take Colossal Biosciences as an example. They are a company that has created a de-extinction project to bring back the Woolly Mammoth. How on earth do you conceptualize the process of de-extinction in a campaign video without spending an enormous amount of money?

Sora could be just what the doctor ordered.

