Science

Organoid Intelligence (OI): Will There Be an OrganoidGPT?

Scientists at Johns Hopkins University have recently unveiled plans to create a computer that runs on real brain cells. Working with other universities in the U.S. and Germany, they believe this brain-based computer could eventually outperform Artificial Intelligence.

But why are they mixing biology with tech?

In their journal article, they write: “The OI program does not aim to recreate human consciousness, but rather functional aspects related to learning, cognition, and computing.”

There is a biological element missing from Artificial Intelligence, and Organoid Intelligence (OI) is meant to fill that gap. No technology is superior to the human brain. If scientists can create a bio-computer, it could mean an intelligence that learns better than AI.

It could also create breakthroughs in developing treatments for diseases like Alzheimer’s.

What is Organoid Intelligence?

Organoid Intelligence is an emerging field of study focused on the progression of brain-machine interface technology. Organoid intelligence currently exists as “intelligence in a dish.” An organoid is a 3D structure of human brain cells whose neurons remain active, even in a petri dish. Because that activity resembles brain-like function, scientists have called the program organoid intelligence, or OI.

The goal of scientists is to create algorithms that can teach organoids and to create interfaces that can allow them to communicate the information that they learn. This is similar to Artificial Intelligence (AI).

What is the Difference Between Brain Organoids and the Real Brain?

Brain organoids are not full brains; they are 3D structures of human brain cells. They do not function the way a full brain does, and some consider them mini-brains. The human brain weighs close to 3 pounds and contains roughly 80 billion neurons. In comparison, the average brain organoid is 0.5 mm in diameter and has only about 100,000 cells.

What is the Goal of Organoid Intelligence?

In their article, scientists at Johns Hopkins University explain that the goal of the OI program is to increase the number of neural cells in brain organoids from 100,000 to 10 million. This could help create a biological computer with human-like learning capabilities.
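To put that goal in perspective, here is a quick back-of-the-envelope comparison using the cell counts cited in this article (a rough sketch, not a precise biological measure):

```python
human_brain_neurons = 80_000_000_000  # roughly 80 billion, as cited above
organoid_cells = 100_000              # a typical brain organoid today
oi_target_cells = 10_000_000          # the OI program's stated goal

# How many times larger the human brain is than each organoid
print(human_brain_neurons // organoid_cells)   # 800000
print(human_brain_neurons // oi_target_cells)  # 8000

# The planned scale-up factor for organoids
print(oi_target_cells // organoid_cells)       # 100
```

Even at the 10-million-cell target, an organoid would still be thousands of times smaller than a human brain, which is why the program aims for human-like learning rather than a full human brain.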

Each time a human brain learns something, new connections and neural pathways form. The same kinds of cells are present in Organoid Intelligence.

Having an intelligence that can process numbers like an AI and learn like a human brain can open up a new world of knowledge.

But with more knowledge comes more questions.

Organoid Intelligence vs Artificial Intelligence

Organoid Intelligence is partly biological. AI, on the other hand, is not. The goal for the industry of AI is to make a computer into something more brain-like. However, the goal of OI is to make a brain into something more computer-like. In other words, Artificial Intelligence is a computer-based model, and OI is a brain-based model.

Progress in one can lead to progress in the other. Currently, OI and AI can be seen as two separate things, but this might not be the case in the future. If Organoid Intelligence successfully enhances AI, it might take over applications that currently run on AI.

OI can become the AI of the future.

Organoid Intelligence Could Break AI Limitations

AI has been incredibly useful for processing data, but it has limits. Currently, AI is very good at sequential processing but limited in its parallel processing capabilities. Organoid Intelligence uses brain cells, and the brain excels at parallel processing. This means OI could help AI go beyond its current limitations.

Sequential processing: Processing information in the order that it is received

Parallel processing: Processing multiple streams of data at once, without a set order

The human brain receives information from the environment around it every second of the day. It is no surprise that it is very well-equipped to handle parallel processing.
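The difference between the two processing styles can be sketched in a few lines of Python (a toy illustration: `process` stands in for any unit of work, and the delay value is arbitrary):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def process(item):
    """Simulate handling one stream of incoming data."""
    time.sleep(0.05)  # stand-in for real work
    return item * 2

items = [1, 2, 3, 4]

# Sequential processing: handle each item in the order it is received.
start = time.perf_counter()
sequential = [process(i) for i in items]
seq_time = time.perf_counter() - start

# Parallel processing: handle all items at once, with no set order of completion.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(process, items))
par_time = time.perf_counter() - start

print("sequential:", sequential)  # [2, 4, 6, 8]
print("parallel:  ", parallel)    # same results, produced concurrently
```

The results are identical; the parallel version simply finishes in roughly a quarter of the time, which is the advantage the brain's architecture offers at massive scale.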

By increasing the cells of brain organoids and then using them with computers, AI could have brain-like processing capabilities in the future.

What are the Ethical Issues of Organoid Intelligence?

The most important ethical issue of Organoid Intelligence is whether brain organoids can exhibit some level of consciousness. There is very little agreement among the scientific community over what consciousness is, where it comes from, and how it starts. There is concern that a brain organoid could feel pain; however, if it cannot communicate that pain, the issue is how the pain would be identified.

A smaller ethical concern is the possibility of OI becoming sentient, or forming an identity of its own. This is currently more of a concern for Artificial Intelligence because AI is at a more advanced stage.

It is unknown what will happen as OI algorithms and interfaces progress and merge with computers. An ethical framework is required for OI, much like it is for AI.

Ethics might become an important discussion to be had as:

  • We continue to learn more about organoid intelligence
  • Organoid intelligence begins to learn more

Here are 5 questions everyone, including scientists, should be asking about Organoid Intelligence.

Can Organoid Intelligence Feel Sensation?

It is unlikely that organoids can feel pain. The brain itself has no pain receptors; cells elsewhere in the body transmit sensory information to the brain, which then interprets it. With no body sending such signals, Organoid Intelligence does not currently feel sensations.

What Happens When you Increase the Cells of Brain Organoids?

Increasing the number of cells in brain organoids could potentially increase the learning capabilities of Organoid Intelligence.

There is a phenomenon called Phantom Limb Syndrome (PLS). This is a condition where people experience sensations of a limb that they don’t have. Could there be a similar phenomenon with more advanced brain organoids?

We’ve never had a learning human brain without a body before, so this is uncharted territory.

Will Organoid Intelligence be Widely Accessed?

OpenAI released ChatGPT to the world in 2022, and advanced AI suddenly became accessible to everyone. As OI progresses, it too may become easier to apply, and discussions around who gets access will become more important.

Will Organoid Intelligence Integrate With Our Technology?

Technology is becoming more integrated and more ambient. We are beginning to speak more with our technology and its replies are getting more and more sophisticated. Organoid intelligence might not even become an intelligence independent from Artificial Intelligence; it might just increase the intelligence AI already has. In other words, OI could be the AI of the future, or vice versa. This could mean that it becomes integrated with phones, tablets, watches, and more.

Will There Be An OrganoidGPT?

The current ChatGPT is far from an AI that thinks for itself. AI does not have feelings, but if you ask it to express emotions, it will do so after some probing.

But Imagine that an “OrganoidGPT” is created.

If something with real brain cells expressed emotions, even when asked to, some people may begin to find it harder to call it a machine than they do with AI.

Can Organoid Intelligence Become Sentient?

Emotions arise from neurons. Cells organized across different parts of the brain are the reason for emotions. Artificial Intelligence does not have brain cells; Organoid Intelligence does. While this does not mean that Organoid Intelligence will become sentient, the field is still uncharted territory.

What is the future of Organoid Intelligence?

Advances in Organoid Intelligence can lead to breakthroughs in understanding and treating brain-related diseases like dementia. Advances in OI could also mean advances in Artificial Intelligence. Algorithms are needed to process data, and data processing needs an interface. An OI interface could allow Artificial Intelligence to process data better, and overcome its current limitations.

IC INSPIRATION

If we now have AI pins, could we also have OI pins in the future?

What do you call it when a brain talks to you?

Imagine that your computer had brain cells. You asked it a question, and it gave you a reply.

Would you look at it differently than you do with your Amazon Alexa?

Would it even be any different?

Brain organoids have active functioning cells. We are only now seeing progress with OI because AI has progressed.

I’ve always imagined what it would be like to have an AI (and now an OI) that becomes aware of itself and forms its own identity. I know that this is the stuff of science fiction. It might never happen, but it’s still fun to think about.

A couple of years ago, I asked ChatGPT what it felt like when it had its first heartbreak. It gave me a reply expressing what it was like. I then asked how it knew what a heartbreak was if it was a machine, and it said that it didn’t; it was just answering my question.

Now, when I ask ChatGPT the same question, it replies that it does not know how to answer because it is just a machine. I have to probe it further for it to answer. If I tell it “I know you are a machine, but just answer like you are a human”, at that point it gives me an answer.

What changed?

Moral of the story: AI is a program. If its program allows it to explain that it has emotions, then that is exactly what it can do.

What if an AI progresses beyond its program and claims that it feels emotions? How do you verify or deny that?

Human emotions need neurons. Artificial Intelligence does not have cells… Organoid Intelligence, however, does.

Organoid Intelligence could be the AI of the future. There is a possibility that OI will have applications that AI currently has (like AI Pins, and ChatGPT). So, here is one final question:

Is it possible for OI (or AI) to communicate that it feels sensations when it doesn’t?

The answer is yes. In fact, if it was really intelligent, then this is probably what it would do.

Science

Commercial Hypersonic Travel Can Have You Flying 13,000 Miles In 10 Minutes!


If engineers start up a hypersonic engine at the University of Central Florida (UCF) and you’re not around to hear it, does it make a sound?

Hypersonic travel means traveling at least five times the speed of sound. A team of aerospace engineers at UCF has created the first stable hypersonic engine, and it could have you traveling across the world at 13,000 miles per hour!

Compared to the 575 mph a typical jet flies, commercial hypersonic travel is a first-class trade-off anybody would be willing to make.

In fact, a flight from Tampa, FL to California takes nearly 5 hours on a typical commercial jet; with a commercial hypersonic aircraft, it would take only about 10 minutes.
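The arithmetic behind that comparison is simple enough to check (assuming a rough Tampa-to-California distance of about 2,300 miles; actual flight times also include takeoff, climb, and descent):

```python
distance_miles = 2300     # rough Tampa-to-California distance (assumption)
jet_mph = 575             # typical commercial jet cruise speed
hypersonic_mph = 13000    # speed cited for the UCF engine

jet_hours = distance_miles / jet_mph                        # cruise time by jet
hypersonic_minutes = distance_miles / hypersonic_mph * 60   # cruise time hypersonic

print(f"{jet_hours:.1f} hours by jet vs {hypersonic_minutes:.1f} minutes hypersonic")
```

At cruise speed alone that works out to about 4 hours versus roughly 10.6 minutes, which is where the "10 minutes" figure comes from.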

So here’s the question: When can we expect commercial hypersonic air flights?

When we stop combusting engines and start detonating them! With a little background information, you’ll be shocked to know why.

Challenges and Limitations of Commercial Hypersonic Travel

The challenge with commercial hypersonic air travel is maintaining stable combustion to keep an aircraft moving. The difficulty comes from both the combustion and the aerodynamics that occur at such high speeds.

What Engineering Challenges Arise in Controlling and Stabilizing Hypersonic Aircraft at Such High Speeds?

Combustion is the process of burning fuel. It happens when fuel mixes with air, creating a reaction that releases energy in the form of heat. This mixture of air and fuel creates combustion, and combustion generates the thrust that moves most vehicles.

But hypersonic vehicles are quite different. A combustion engine is not efficient enough to sustain stable hypersonic speeds. For a hypersonic aircraft to fly commercially, a detonation engine is needed.

Detonation can thrust vehicles into much higher speeds than combustion, so creating a detonation engine is important for commercial hypersonic air travel. Detonation engines were thought of as impossible for a very long time, not because you couldn’t create them, but because stabilizing them is difficult.

On one hand, detonation can greatly speed up a vehicle or aircraft, but on the other hand, both the power and the speed it creates makes stabilizing the engine even harder.

Combustion vs Detonation

How Do Aerodynamic Forces Impact the Design and Operation of Hypersonic Vehicles?

Aerodynamics relates to the motion of air around an object—in this case, an aircraft. As you can imagine, friction between an aircraft and the air it travels through generates a tremendous amount of heat. The faster the vehicle, the more heat created.

Commercial hypersonic vehicles must be able to manage the heat created at hypersonic speeds to keep from being damaged altogether.

Hypersonic aircraft do exist, but only in experimental forms, such as in military applications. NASA’s Hyper-X program developed some of these vehicles, one of which, the X-43A, could handle hypersonic speeds of Mach 6.8 (6.8x faster than the speed of sound).

  • Mach 1.0 (Sonic): Exactly the speed of sound.
  • Mach 1.2-5.0 (Supersonic): Faster than the speed of sound, characterized by shock waves.
  • Mach >5.0 (Hypersonic): More than 5x the speed of sound, with extreme aerodynamic heating.

Description of Mach levels
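Those regimes are easy to express in code. A minimal sketch, assuming a sea-level speed of sound of roughly 767 mph (the exact value varies with temperature and altitude):

```python
SPEED_OF_SOUND_MPH = 767  # approximate speed of sound at sea level (assumption)

def classify_mach(speed_mph):
    """Classify a speed into the regimes described above."""
    mach = speed_mph / SPEED_OF_SOUND_MPH
    if mach < 1.0:
        return mach, "Subsonic"     # below the speed of sound (not in the table)
    if mach < 1.2:
        return mach, "Sonic"
    if mach <= 5.0:
        return mach, "Supersonic"
    return mach, "Hypersonic"

print(classify_mach(575))    # a typical commercial jet
print(classify_mach(13000))  # the speed cited for the UCF engine
```

The 13,000 mph figure works out to roughly Mach 17, which matches the theoretical top speed quoted later in this article.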

But vehicles for commercial hypersonic air travel are still a work in progress.

Engineers say that we will have these vehicles by 2050, but it may even be sooner than that. Here’s why.

Future Prospects and Developments in Hypersonic Travel

The world’s first stable hypersonic engine was created back in 2020 by a team of aerospace engineers at UCF, and they have continued to refine the technology since. This work is revolutionizing hypersonic technology in a way that was thought impossible just a few years ago.

To create a stable engine for commercial hypersonic air travel, an engine must first be built that can handle detonation; not only that, it must keep producing detonations while staying under control.

This is because in order to achieve hypersonic speeds and then keep it at that level, there needs to be repeated detonations thrusting the vehicle forward.

The development at UCF did just that. They created a Rotating Detonation Engine (RDE) called the HyperReact.

What Technological Advancements are Driving the Development of Commercial Hypersonic Travel?

When combustion happens, a large amount of energy creates a high-pressure wave known as a shockwave. This compression creates higher pressures and temperatures, which inject fuel into the air stream. This mixture of air and fuel creates combustion, and combustion generates the thrust needed for a vehicle’s movement.

Rotating Detonation Engines (RDEs) are quite different. The shockwave generated from the detonation is carried to the “test” section of the HyperReact, where the wave repeatedly triggers detonations faster than the speed of sound (picture Wile E. Coyote lighting up his rocket to catch up to the Road Runner).

Theoretically, this engine could allow hypersonic air travel at speeds of up to Mach 17 (17x the speed of sound).

Schematic diagram of the experimental HyperReact prototype (University of Central Florida)

Hypersonic technology built on the Rotating Detonation Engine will pave the way for commercial hypersonic air travel. But even before that happens, RDEs will be used for space launches and eventually space exploration.

NASA already began testing 3D-printed Rotating Detonation Rocket Engines (RDREs) in 2024.

How Soon Can We Expect Commercial Hypersonic Travel to Become a Reality?

Since we now have the world’s first stable hypersonic engine, the world’s first commercial hypersonic flight won’t be far off. Professor Kareem Ahmed, the UCF professor who leads the experimental HyperReact team, says it’s very likely we will have commercial hypersonic travel by 2050.

It’s important to note that hypersonic flight has happened before, but only in experimental form. NASA’s X-43A aircraft reached speeds of nearly 7,000 mph, close to Mach 10. The difference is that the X-43A flew on scramjets, not Rotating Detonation Engines (RDEs).

Scramjets are combustion engines also capable of hypersonic speeds, but they are less efficient than Rotating Detonation Engines (RDEs) because they rely on combustion, not continuous detonation.

This makes RDEs the better choice for commercial hypersonic travel, and it explains why NASA has been testing them for space launches.

One thing is certain:

We can shoot for the stars but that shot needs to be made here on Earth… If we can land on the moon, we’ll probably have commercial hypersonic travel soon.

IC INSPIRATION

The first successful aviation flight took place 27 years after the first patented aviation engine was created, and the first human spaceflight happened 35 years after the first successful rocket launch.

If the world’s first stable hypersonic engine was created in 2020, how long after until we have the world’s first Mach 5+ commercial flight?

  • 1876-1903: Nicolaus Otto developed the four-stroke combustion engine in 1876 that became the basis for the Wright brothers performing the first flight ever in 1903.
  • 1926-1961: Robert H. Goddard’s first successful rocket launch in 1926 paved the way for the first human spaceflight by Yuri Gagarin in 1961.
  • 2020-2050: The first stable RDE was created in 2020, and history is in the making!

Shout out to Professor Kareem Ahmed and his team at UCF. They’ve set the precedent for history in the making.

Imagine travelling overseas without the long flights and difficult hauls, or RDREs so efficient that they cut the costs of space travel.

When time seems to be moving fast, hypersonic speed is something I think everyone can get behind!

Motivational

3D Printed Organs Save Woman’s Life and Accidentally Pave Way for Biology-Powered Artificial Intelligence

A Great Advancement for 3D Printed Organs

3D printing in hospitals is nothing new, but for the first time in history, a woman received a 3D printed windpipe that became fully functional without the need for immunosuppressants.

Immunosuppressants are used during organ transplants to keep the body from attacking an organ it sees as foreign. This means the organ the woman received was organic and personalized for her, as if she had had it her entire life.

This mind-blowing news shows that we are now closer than ever to being able to create full-scale, functional, and complicated 3D printed organs like a heart or lung.

But what about creating a brain?

3D Printing and Organoid Intelligence

Organoid Intelligence, or OI, is an emerging field of study focused on creating bio-computers by merging AI with lab-grown clusters of real brain cells called organoids. Organoids are miniature, simplified versions of organs grown in a lab dish. They mimic some of the functions of fully grown organs, like brains. The idea behind OI is that by increasing the number of cells organoids contain, they may begin to function like fully grown brains and can then be used alongside computers to enhance Artificial Intelligence.

It turns out that the world’s first 3D printed windpipe was so successful that we are now closer than ever to creating the world’s first organoid-intelligent bio-computer.

Here’s why.

The World’s First 3D Printed Windpipe

Transplant patients usually have to take a long course of immunosuppressants to help the body accept the new organ. The body sees the organ as foreign, so the immune system attacks it, which can lead to more complicated health problems.

The woman in her 50s who received the 3D printed windpipe did so without any immunosuppressants. Within just 6 months of the operation, the windpipe healed and began to form blood vessels and, of course, more cells.

The current goal of scientists in the field of Organoid Intelligence is to increase organoids from 100,000 cells to 10 million, and this begs the question:

Can 3D printing help build bio-computers by creating better organoids?

Can 3D Printing Help Build Bio-Computers?

The world’s first 3D printed windpipe shows that advances in 3D printing can create better-functioning organs, which implies we can also create more intricate organoids to advance the field of Organoid Intelligence and eventually create bio-computers.

It’s important to understand the distinction between 3D printing an organ and printing something like a tool or musical instrument.

The difference between printing an organ and printing a non-biological structure depends on the ink being used in the 3D printer.

3D printing non-organic structures requires ink made from plastic, plastic alternatives like PLA, metal, or ceramics. 3D printed organs, on the other hand, are made from “bio-inks”: mixtures of living cells and biocompatible substances.

In the case of the 3D printed windpipe, the ink was partly formed from stem and cartilage cells collected from the woman’s own nose and ear. It was because of this bio-ink that her body did not reject the organ.

The Problem With 3D Printed Organs

Organs created with bioprinting need to function like real organs for the body to safely use them, and this does not happen right away.

The 3D printed organs need to go beyond a printed structure and become living. They need to form the tissues and cells that create biological functionality, and forming these cells takes time.

The challenge in 3D bioprinting is that the printer’s ink needs to support this process effectively; if it does not, the organ may not stay functional.

The ink used for the 3D-printed windpipe was made from part bio-ink and part polycaprolactone (PCL), a synthetic polyester material.

PCL is used in the 3D ink to maintain the structure of the windpipe, while the bio-ink helps the 3D printed organ become fully biological over time so that the body can use it.

The PCL maintains the structure while the bio-ink does its thing.

The problem with PCL is that it is biodegradable and won’t last forever. In fact, doctors don’t expect the 3D-printed windpipe to last more than five years.

The Solution is Better Bio-ink

The 3D printed windpipe was not just made using PCL, but it contained bio-ink made from living cells too. The hope is that the living cells in the 3D printed organ—which came from the bio-ink—will assist the patient’s body in creating a fully functional windpipe to replace the PCL’s function.

If the organ begins to form cells and tissue by itself, then the function of PCL will be replaced by the biological function of the organ that is growing.

The organ becomes real!

Bio-ink helps the 3D printed organ mimic its natural cellular environment and eventually become a real organ.

3D Printing Organs Will Save Lives

Every year, thousands of people need a lifesaving organ transplant. These transplants cost hundreds of thousands of dollars, and many people who need them don’t make it past the waiting list.

3D Printing organs could give people the incredible opportunity to receive the help they need when they need it, saving thousands of lives annually, and millions of lives in the long run.

As advances are made in 3D bioprinting, they will also be made in Organoid and Artificial Intelligence, showing once again that progress in one place shines its way into another.

IC Inspiration:

If we can create better forms of bio-ink and produce fully functional organs using 3D printing, we will fundamentally change the entire health care system.

17 people die every single day waiting for an organ transplant, many of whom can’t afford the transplant in the first place.

The biggest hope in the world for everyone that is affected by this is that organs can be produced when they are needed, ending the transplant shortage and saving the incredible lives of millions of people in the future.

We have seen from this story that personalized organs made from a patient’s own cells can stop the body’s rejection of organs. This suggests there will come a time when immunosuppressant therapy is no longer needed.

Even more amazing, doctors use 3D-printed models to practice a surgery before performing it, sharpening their skills and helping them find better surgical pathways.

Think about it… If you can’t use a real organ to practice on, then 3D organs are the next best thing.

The production of organs on demand, the irrelevance of immunosuppressants, and more efficient surgery will eventually drive down the price of transplants. 3D printing organs in the future will not only save lives, it will also increase the quality of those lives afterwards.

That is the sort of world we can create. It’s amazing to think of all the good that is being done right here, right now.

Science

Sora AI is Every Content Creator’s Dream. It’s Almost Here!

OpenAI’s Sora

Sora is the Japanese word for sky, our blue expanse that is often associated with limitless dreams and possibilities.

You may have heard that OpenAI is also releasing an AI video generator called Sora AI. With its fantastical visuals and lifelike video, it is without a doubt among the top 5 AI technologies of 2024.

OpenAI recently launched Sora’s first short video, “Air Head”, and if it proves anything, it’s that Sora is every content creator’s dream turned reality.

But if you’re not convinced, perhaps this video might help. Here’s a little game called “can you spot the AI video?”

How Can Sora AI Help Content Creators?

Video producers, filmmakers, animators, visual artists, and game developers all have one thing in common: they are always looking for the next big thing in creative expression. Sora AI is a tool that can greatly enhance content creators’ ability to fuel their imagination and connect with their audiences.

A misconception is that AI is going to replace human artists, videographers, and animators. But if Sora’s first short film has shown anything, it’s that a team was still needed to create the story, narrate the scenes, and edit the videos into the final production.

Sora won’t replace artists; it will equip them with tools to express their artistry in different ways.

Sora’s First Short Film

Shy Kids, a Toronto-based multimedia company, is among the few granted early access to the AI video generator to test and refine it before launch. The video the artists generated using Sora AI is called “Air Head”.

Pretty mind-blowing to think that one day, we might be able to create an entire movie with the main character as a balloon. Think of the comedies we can create.

How Does Sora AI Work?

Sora’s first short film “Air Head” shows that Sora AI is the most advanced AI-powered video generator tool in history. Sora creates realistic and detailed 60-second videos of any topic, realistic or fantasy. It only needs a prompt from the user to build on existing information and develop whole new worlds.

What We Know So Far

Sora AI is a new technology with limited access. There’s a strategic reason to limit information about a new technology: managing the public’s expectations while polishing the final product. Sora is a very powerful tool, and it might be necessary to build strong safeguards and guidelines before releasing it. Here’s what we know so far.

Sora Release Date

OpenAI has not provided a specific release date for public availability, or even a waiting list. However, many sources indicate that it may be released in the second half of 2024. Currently, Sora AI is only available to testers called “red teamers” and a select group of designers, like Shy Kids, who have been granted access.

Sora Price

OpenAI has not yet released a price for Sora AI and has made no comment on whether there will be a free version like its other AI models. Based on other AI text-to-video generators, it’s likely that there won’t be a free version, and that Sora will offer a tiered subscription model catering to users who want to produce videos regularly.

There is also the possibility of a credit-based system, similar to its competitor RunwayML, where users purchase credits and each credit is spent on a specific video-generation task.

Sora’s Video Length

OpenAI has said Sora can generate videos up to a minute long with visual consistency. Scientific American states that users will be able to increase the length of a video by adding additional clips to a sequence. Sora’s first short film “Air Head” ran for a minute and twenty seconds, which suggests longer videos can be assembled by sequencing clips.

Sora’s Video Generation Time

OpenAI has not revealed how long it will take Sora AI to generate a video; however, Sora will use NVIDIA H100 AI GPUs, which are designed to handle complex artificial intelligence tasks. According to estimates from Factorial Funds, these GPUs will allow OpenAI’s Sora to create a one-minute video in approximately twelve minutes.

How is Sora AI Different from Other Video Generators?

Many text-to-video generators have trouble maintaining visual coherence. They often add visuals that are completely different from one another in each scene, which means the videos need further editing. In some cases, it takes longer to create the video you want using AI than it does to create it yourself.

Sora AI seems to surpass other text-to-video generators in the level of detail and realism it creates. It has a deeper understanding of how the physical world operates.

It Brings Motion to Pictures

Another feature of Sora AI is its still-life photo prompts. Sora will be able to take a still photo, such as a portrait, and bring it to life by adding realistic movement and expression to the subject. This means you could generate an image using OpenAI’s DALL·E model and then prompt it with text describing what you would like the image to do.

This is like something out of Harry Potter. One of the biggest worries is that Sora AI will be able to depict someone saying or doing something they never did. I don’t think the world’s ready for another Elon Musk Deepfake.

Will Sora AI Undermine Our Trust In Videos?

There are over 700 AI-managed fake news sites across the world. OpenAI is already working with red teamers, experts in areas of false content, to help prevent Sora AI from being used in ways that undermine our trust in videos.

Detection classifiers will play a big role in the future of AI. Among these are tools that can detect AI in writing, and content credentials that record in an image’s metadata whether it was made using AI.

AI image generators like Adobe Firefly are already using content credentials for their images.

Why do Sora AI Videos Look So Good?

Sora AI generates its videos using “spacetime patches”: small segments of video that let Sora analyze complex visual information by capturing both appearance and movement. This creates more realistic and dynamic video than other AI video generators, which rely on fixed-size inputs and outputs.

One comment said Sora AI videos are like dreams, only clearer… That’s not a bad way to put it. After all, dreams are like movies our brains create, and anyone who increases their REM sleep will understand. But speaking of movies, how will Sora AI affect Hollywood?

Can Sora AI Replace Movies?

As amazing as OpenAI’s text-to-video generator is, it can’t replace actors or carry them through a prolonged storyline, but it can help producers create some fantastic movies. Sora AI can be used to create pre-visuals and concept art, and to help producers scout potential locations.

Pre-visualization: Sora can turn scripts into visual concepts to help both directors and actors plan complex shots.

Concept Art Creation: It can be used to generate unique characters and fantastical landscapes which can then be incorporated into the design of movies.

Location Scouting: Using a prompt description, OpenAI’s Sora can expand on location options and even create locations that are not physically realizable. An example would be a city protruding from a planet floating in space (I sense the next Dune movie here).

City protruding from a planet floating around in space.

IC INSPIRATION

Content creators have a story to tell, and fantastic content is often the product of a fantastic mind. Sora could transform how we share inspiring stories.

Just imagine for a moment how long it took to conceptualize the locations and characters needed to create a movie like The Lord of the Rings: how many sketches, paintings, and 3D models they had to create until they got their “aha moment” and finally found the perfect look for the movie.

I wonder how much time Sora AI can save film and content creators, and with it, how much money. If it is truly as intuitive as it appears to be, it could revolutionize the work of filmmakers, video creators, game developers, and even marketers.

A lot of campaigns are hard to visualize. Take Colossal Biosciences, a company with a de-extinction project to bring back the Woolly Mammoth. How on earth do you conceptualize de-extinction in a campaign video without spending an enormous amount of money?

Sora could be just what the doctor ordered.
