

The Largest Body of Water in the Universe is Floating in Space. Can We Use It?


All life comes from and is dependent upon water. When looking for hints of life in other worlds, the first thing scientists look for is water. Our Earth holds a staggering amount of it. Over a billion cubic kilometers of it are sloshing around in our oceans, lakes, and rivers.

Yet that’s a mere drop in the bucket compared to the largest body of water in the universe. This recently discovered reservoir holds about 140 trillion times as much water as all of Earth’s oceans combined!
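If you like to check the math, here’s a quick back-of-the-envelope sketch in Python. The 1.3 billion cubic kilometer figure for Earth’s oceans is an approximation of the “over a billion” above, and astronomers actually measured the water as vapor, so this is just the equivalent liquid volume:

```python
# Rough sense of scale only: Earth's oceans hold roughly 1.3 billion
# cubic kilometers of liquid water (an approximation).
earth_oceans_km3 = 1.3e9
quasar_reservoir_km3 = 140e12 * earth_oceans_km3  # 140 trillion times Earth's oceans

print(f"{quasar_reservoir_km3:.1e} cubic kilometers")  # ~1.8e+23 cubic kilometers
```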

That’s pretty mind-blowing.

There’s only one place an ocean that huge could be. It’s floating around in distant space.

It’s hard to imagine this much water floating out in space, but space is exactly where water originates. In fact, the collapse of a star is what created the largest body of water in the universe.

Where Does Water Come From?

Water molecules consist of one oxygen and two hydrogen atoms, and both elements are abundant in space. Scientists tell us that the Big Bang created hydrogen, while oxygen is forged in the cores of massive stars.

Stars like our sun create new material through nuclear fusion in their cores. When a smaller star dies, it sheds the gas it has been creating, and that gas forms a cloud called a nebula.

Space telescopes have detected water molecules within these nebulae. The Orion Nebula alone produces enough water every day to fill Earth’s oceans 60 times over!

Water exists on most of the planets in our solar system. It’s also found on many moons, including Earth’s, as well as in comets and asteroids. Even the rings of Saturn hold a supply of water.

How Does a Star Form Water?

Water develops from the life cycle of a star. Stars come in all sizes, and bigger stars will have a much different life cycle than smaller ones.

A Star is Born

All stars begin life inside a giant cloud of dust and gas called a nebula.

But aren’t nebulas formed from dying stars?

Yes. When stars die, they pour their contents out into the galaxy, and that material paves the way for new stars.

As a clump of that cloud collapses and spins faster, it grows hotter and denser. When its core becomes hot enough to ignite nuclear fusion, scientists consider it a full-grown star.

A Collapse of a Star Created the Largest Body of Water in the Universe

When a star depletes its supply of hydrogen, it expands and turns red as it cools down. This is called a red giant.

What happens after that is a direct result of the size of the star. 

  • Small Stars: When a smaller star dies, its outer layers are expelled while the core collapses. The small white core that remains is called a white dwarf. Once it cools completely, it becomes a black dwarf.
  • Large Stars: A larger star will undergo a massive explosion called a supernova. If the core that’s left is 1.4 to 3 times the mass of our sun, it becomes a neutron star, which is just the burnt-out core of the star. Here’s a fun fact: gold is produced when neutron stars collide!
  • Massive Stars: When the leftover core is more than 3 times the mass of our sun, it collapses under its own gravity and becomes a black hole.

The biggest body of water in the universe was found around a special kind of feeding black hole known as a quasar.


What is a Quasar?

Although scientists still have a lot to learn about quasars, it’s generally agreed that a quasar is powered by a giant black hole at the center of a galaxy.

The black hole slowly sucks in everything that gets too close to it. Vast amounts of matter swirl around it like water going down a bathtub drain. This large, swirling mass is called an accretion disk.

Scientists have identified a great many quasars. The largest body of water in the universe surrounds one known as APM 08279+5255.

How Did Scientists Find the Quasar Water Reservoir?

Scientists have used powerful telescopes to examine the water reservoir of quasar APM 08279+5255. Using spectroscopy, they can analyze the material that swirls around it.

Spectroscopy is the study of how light and matter interact. Every element in the periodic table absorbs and emits light at its own unique set of wavelengths, a signature never duplicated by any other element. These signatures are called spectral lines.

The spectral lines around quasar APM 08279+5255 show scientists the largest body of water found in the universe to date.

In the vacuum of space, water can’t exist as a liquid. Instead, the water molecules take on the form of a vapor like steam or fog. This is exactly what the biggest body of water in the universe looks like.

Where is the Quasar Water Reservoir?

The great distances between objects in space are staggering to the human mind.

Scientists measure distances in space in light-years. A light-year is the distance light travels in one Earth year, which works out to about 9.5 trillion kilometers.
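For anyone who wants to check that figure, here’s the quick calculation, using the standard value for the speed of light:

```python
# Distance light travels in one year.
speed_of_light_km_per_s = 299_792.458       # kilometers per second
seconds_per_year = 365.25 * 24 * 60 * 60    # about 31.6 million seconds

km_per_light_year = speed_of_light_km_per_s * seconds_per_year
print(f"{km_per_light_year:.2e} km")        # ~9.46e+12 km, i.e. about 9.5 trillion kilometers
```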

This massive quasar water reservoir the scientists have been studying is 12 billion light-years away. What scientists see through their telescopes is light that left the quasar 12 billion years ago. So, scientists see what the quasar looked like then, not now.

In other words, they’re watching how the cosmos came into existence.

Through this discovery, scientists can see that water has been a huge part of the universe from the very beginning of time. It’s in those extremely long-ago times that the biggest body of water in the universe began to form.

How Did Earth Get Its Water?

Throughout Earth’s history, comets and asteroids have pummelled the face of the planet; about 180 known impact craters survive around the world. These ancient asteroids and comets brought water with them, collected during their formation in a star’s nebula, likely as masses of ice. In Earth’s warmer environment, that ice melted into liquid water and filled our ocean basins.

So, the next time you take a cool, refreshing sip, take a moment to realize that you are drinking the liquefied remains of an ancient star.

Water in Space for the Future

Artificial Intelligence is changing the face of space travel. And now, scientists are studying Organoid Intelligence (OI), which aims to create biological computers that can enhance AI. This could lead to breakthroughs in space travel in the future.

There is a very good reason to enhance the intelligence that helps us look for water in space. In 2024, NASA plans to send a crew of astronauts on a trip around the moon for the first time since 1972, with a crewed lunar landing planned to follow.

The quasar water reservoir around APM 08279+5255 is fascinating. Unfortunately, the largest body of water in the universe is far too distant to ever be of use to us, except as a subject of study.

However, in the not-so-distant future, astronauts will need to know how to find and use extraterrestrial water sources. In the years to come, there will be more extensive trips to Mars and other celestial bodies.

Collecting and studying lunar soil could help us gain an understanding of the composition of the water in space. This could allow us to determine the best ways to gather, process, and use it in space exploration.

It won’t be long before we’ve established moon colonies. When we do, we’ll have enough water for drinking, cleaning, and even growing vegetables.


IC INSPIRATION

There’s another massive ocean in space, and it’s much closer than the quasar reservoir.

The planet Saturn has a startling 146 identified moons. One of these satellites in particular has caught the eyes of the astronomical community.

NASA teamed up with other space agencies to launch a robotic spacecraft called Cassini to explore Saturn. Part of its mission was to study Saturn’s tiny moon Enceladus, after scientists observed an unusual relationship between this moon and Saturn’s vast rings.

As the probe neared the moon and began to send back data, scientists were in for a big surprise. Enceladus hides a huge ocean beneath its surface: a large, liquid, saltwater sea buried under an icy shell about 30 to 40 km thick.

The sea is about six miles deep and is in constant motion. Enceladus shoots out plumes of water hundreds of miles into space. Some of it returns to the ocean or joins vaporous mist surrounding the little moon. However, some of the water keeps going and forms Saturn’s outermost ring, known as the E ring.

Of course, this incredible discovery has sparked a whole host of new questions. Scientists are exploring whether this watery world could be hosting undiscovered life. Science now considers this world to be a possibility for future human habitation.

Just think, if water from this distant ocean could be desalinated, it could provide for colonies in years to come.

The more we learn about the fantastic world of water in space, the more we will come to appreciate this precious gift.

Joy L. Magnusson is an experienced freelance writer with a special passion for nature and the environment, topics she writes about widely. Her work has been featured in Our Canada Magazine, Zooanthology, Written Tales Chapbook, and more.


Commercial Hypersonic Travel Can Have You Flying 13,000 Miles In 10 Minutes!


If engineers start up a hypersonic engine at the University of Central Florida (UCF) and you’re not around to hear it, does it make a sound?

Hypersonic travel means flying at least five times the speed of sound. A team of aerospace engineers at UCF has created the first stable hypersonic engine, and it could have you travelling across the world at 13,000 miles per hour!

Compared to the 575 mph a typical jet flies, commercial hypersonic travel is a first-class trade-off anybody would be willing to make.

In fact, a flight from Tampa, FL to California takes nearly 5 hours on a typical commercial jet; with a commercial hypersonic aircraft, it would take only about 10 minutes.
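Here’s the rough math behind that claim, as a quick sketch. The 2,150-mile figure is my approximation of the Tampa-to-Los-Angeles air distance, and the calculation ignores takeoff, climb, and descent, which is why real jet flights run closer to five hours than the raw cruise-speed number below:

```python
# Back-of-the-envelope flight times at cruise speed only.
distance_miles = 2_150            # approx. Tampa to Los Angeles (assumption)
jet_speed_mph = 575               # typical commercial jet (from the article)
hypersonic_speed_mph = 13_000     # projected hypersonic speed (from the article)

print(f"Jet:        {distance_miles / jet_speed_mph:.1f} hours at cruise speed")  # ~3.7 hours
print(f"Hypersonic: {distance_miles / hypersonic_speed_mph * 60:.0f} minutes")    # ~10 minutes
```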

So here’s the question: When can we expect commercial hypersonic air flights?

When we stop combusting engines and start detonating them! With a little background information, you’ll be shocked to know why.

Challenges and Limitations of Commercial Hypersonic Travel

The challenge with commercial hypersonic air travel is that it becomes difficult to maintain stable combustion and keep an aircraft moving steadily. The difficulty comes from both the combustion and the aerodynamics at play at such high speeds.

What Engineering Challenges Arise in Controlling and Stabilizing Hypersonic Aircraft at Such High Speeds?

Combustion is the process of burning fuel. It happens when fuel mixes with air, creating a reaction that releases energy in the form of heat, and that energy generates the thrust needed to move most vehicles.

But hypersonic vehicles are quite different. A conventional combustion engine is not efficient enough for a vehicle to achieve stable hypersonic speeds. For a hypersonic aircraft to fly commercially, a detonation engine is needed.

Detonation can propel vehicles to much higher speeds than combustion, so creating a detonation engine is essential for commercial hypersonic air travel. Detonation engines were long thought to be impossible, not because you couldn’t build one, but because stabilizing one is so difficult.

On one hand, detonation can greatly speed up a vehicle or aircraft; on the other, the power and speed it creates make stabilizing the engine even harder.

Combustion vs Detonation

How Do Aerodynamic Forces Impact the Design and Operation of Hypersonic Vehicles?

Aerodynamics relates to the motion of air around an object—in this case, an aircraft. As you can imagine, friction between an aircraft and the air it travels through generates a tremendous amount of heat. The faster the vehicle, the more heat created.

Commercial hypersonic vehicles must be able to manage the heat created at hypersonic speeds to keep from being damaged altogether.

Hypersonic aircraft do exist, but only in experimental form, such as in military applications. NASA’s Hyper-X program developed some of these vehicles, including the X-43A, which handled hypersonic speeds of Mach 6.8 (6.8 times the speed of sound).

  • Mach 1.0 (Sonic): Exactly the speed of sound.
  • Mach 1.2 to 5 (Supersonic): Faster than the speed of sound, characterized by shock waves.
  • Mach 5.0 and above (Hypersonic): More than 5x the speed of sound, with extreme aerodynamic heating.
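To put those Mach numbers into everyday units, here’s a quick conversion sketch. It assumes the sea-level speed of sound of roughly 767 mph; the true value drops with altitude and temperature, so treat these as ballpark figures:

```python
# Approximate Mach-to-mph conversion at sea level.
SPEED_OF_SOUND_MPH = 767   # roughly 343 m/s at 20 degrees C

for mach in (1.0, 5.0, 6.8, 17.0):
    print(f"Mach {mach:>4}: ~{mach * SPEED_OF_SOUND_MPH:,.0f} mph")

# Mach  1.0: ~767 mph
# Mach  5.0: ~3,835 mph  (the hypersonic threshold)
# Mach  6.8: ~5,216 mph  (NASA's X-43A test flight)
# Mach 17.0: ~13,039 mph (roughly the 13,000 mph quoted earlier)
```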

But vehicles for commercial hypersonic air travel are still a work in progress.

Engineers say that we will have these vehicles by 2050, but it may be even sooner than that. Here’s why.

Future Prospects and Developments in Hypersonic Travel

The world’s first stable hypersonic engine was created back in 2020 by a team of aerospace engineers at UCF, and they have continued to refine the technology since. This work is revolutionizing hypersonic technology in a way that was thought impossible just a few years ago.

To create a stable engine for commercial hypersonic air travel, you first need an engine that can handle detonation, and not just a single detonation: it has to keep producing detonations while keeping them under control.

This is because, in order to reach hypersonic speeds and stay there, repeated detonations are needed to keep thrusting the vehicle forward.

The team at UCF did just that. They created a Rotating Detonation Engine (RDE) called the HyperReact.

What Technological Advancements are Driving the Development of Commercial Hypersonic Travel?

When combustion happens, a large amount of energy creates a high-pressure wave known as a shockwave. This compression produces higher pressures and temperatures in the air stream where fuel is injected; the mixture of air and fuel combusts, and that combustion generates the thrust needed for a vehicle’s movement.

Rotating Detonation Engines (RDEs) are quite different. The shockwave generated by the detonation is carried to the “test” section of the HyperReact, where the wave repeatedly triggers detonations faster than the speed of sound (picture Wile E. Coyote lighting up his rocket to catch up to the Road Runner).

Theoretically, this engine could allow hypersonic air travel at speeds of up to Mach 17 (17 times the speed of sound).

Schematic diagram of the experimental HyperReact prototype (University of Central Florida)

Hypersonic technology, with the development of the Rotating Detonation Engine, will pave the way for commercial hypersonic air travel. But even before that happens, RDEs will be used for space launches and eventually space exploration.

NASA had already begun testing 3D-printed Rotating Detonation Rocket Engines (RDREs) by 2024.

How Soon Can We Expect Commercial Hypersonic Travel to Become a Reality?

Since we now have the world’s first stable hypersonic engine, the world’s first commercial hypersonic flight won’t be far off. Professor Kareem Ahmed, who leads the UCF team behind the experimental HyperReact prototype, says it’s very likely we will have commercial hypersonic travel by 2050.

It’s important to note that hypersonic flight has happened before, but only in experimental form. NASA’s X-43A aircraft flew at nearly 7,000 mph, approaching Mach 10. The difference is that the X-43A flew on scramjets, not Rotating Detonation Engines (RDEs).

Scramjets are combustion engines that are also capable of hypersonic speeds, but they are less efficient than Rotating Detonation Engines (RDEs) because they rely on combustion, not continuous detonation.

This makes RDEs the better choice for commercial hypersonic travel, and it explains why NASA has been testing them for space launches.

One thing is certain:

We can shoot for the stars but that shot needs to be made here on Earth… If we can land on the moon, we’ll probably have commercial hypersonic travel soon.


IC INSPIRATION

The first successful aviation flight took place 27 years after the first patented aviation engine was created, and the first human spaceflight happened 35 years after the first successful rocket launch.

If the world’s first stable hypersonic engine was created in 2020, how long after until we have the world’s first Mach 5+ commercial flight?

  • 1876-1903: Nicolaus Otto developed the four-stroke combustion engine in 1876, which became the basis for the Wright brothers’ first flight in 1903.
  • 1926-1961: Robert H. Goddard’s first successful rocket launch in 1926 paved the way for the first human spaceflight by Yuri Gagarin in 1961.
  • 2020-2050: The first stable RDE was created in 2020, and history is in the making!

Shout out to Professor Kareem Ahmed and his team at UCF. They’ve set the precedent for history in the making.

Imagine travelling overseas without the long flights and difficult hauls, or RDREs so good that they reduce the cost and increase the efficiency of space travel.

When time seems to be moving fast, hypersonic speed is something I think everyone can get behind!



3D Printed Organs Save Woman’s Life and Accidentally Pave Way for Biology-Powered Artificial Intelligence


A Great Advancement for 3D Printed Organs

3D printing in hospitals is nothing new, but for the first time in history, a woman received a 3D printed windpipe that became fully functional without the need for immunosuppressants.

Immunosuppressants are used during organ transplants to keep the body from attacking an organ it sees as foreign. This means the organ the woman received was organic and personalized for her, as if she had had it her entire life.

This mind-blowing news shows that we are now closer than ever to being able to create full-scale, functional, and complicated 3D printed organs like a heart or lung.

But what about creating a brain?

3D Printing and Organoid Intelligence

Organoid Intelligence, or OI, is an emerging field of study focused on creating bio-computers by merging AI with lab-grown clusters of real brain cells called organoids. Organoids are miniature, simplified versions of organs grown in a lab dish. They mimic some of the functions of fully grown organs, like brains. The idea behind OI is that by increasing the number of cells organoids contain, they may begin to function like fully grown brains and could then be used alongside computers to enhance Artificial Intelligence.

It turns out that the world’s first 3D printed windpipe was so successful that we are now closer than ever to creating the world’s first organoid-intelligence bio-computer.

Here’s why.

The World’s First 3D Printed Windpipe

Transplant patients usually have to take a long course of immunosuppressants to help the body accept the new organ. The body sees the organ as foreign, so the immune system begins to attack it, which can lead to more complicated health problems.

The woman in her 50s who received the 3D printed windpipe did so without any immunosuppressants. Within just six months of the operation, the windpipe healed and began to form blood vessels and, of course, more cells.

The current goal of scientists in the field of Organoid Intelligence is to grow organoids from 100,000 cells to 10 million, and this raises the question:

Can 3D printing help build bio-computers by creating better organoids?

Can 3D Printing Help Build Bio-Computers?

The world’s first 3D printed windpipe shows that advances in 3D printing can create better-functioning organs, which implies that we can also create more intricate organoids to advance the field of Organoid Intelligence and eventually build bio-computers.

It’s important to understand the distinction between 3D printing an organ and printing something like a tool or musical instrument.

The difference between printing an organ and printing a non-biological structure depends on the ink being used in the 3D printer.

Printing non-organic structures requires inks made from materials such as plastic, plastic alternatives like PLA, metal, or ceramic. 3D printed organs, on the other hand, are made from “bio-inks”: mixtures of living cells and biocompatible substances.

In the case of the 3D printed windpipe, the ink used was partly formed from the stem and cartilage cells collected from the woman’s own nose and ear. It was because of this bio-ink that the woman’s body did not reject the organ.

The Problem With 3D Printed Organs

Organs created with bioprinting need to function like real organs for the body to safely use them, and this does not happen right away.

The 3D printed organs need to go beyond being a printed structure and become living. They need to form the tissues and cells that create biological functionality, and forming those cells takes time.

The problem with 3D bioprinting is that the ink used for the printer needs to be effective at doing this, and if it is not, the organ may not stay functional.

The ink used for the 3D-printed windpipe was made from part bio-ink and part polycaprolactone (PCL), a synthetic polyester material.

PCL is used in the 3D ink to maintain the structure of the windpipe, while the bio-ink helps the 3D printed organ become fully biological over time so that the body can use it.

The PCL maintains the structure while the bio-ink does its thing.

The problem with PCL is that it is biodegradable and won’t last forever. In fact, doctors don’t expect the 3D-printed windpipe to last more than five years.

The Solution is Better Bio-ink

The 3D printed windpipe was not made using PCL alone; it contained bio-ink made from living cells too. The hope is that the living cells from the bio-ink will help the patient’s body grow a fully functional windpipe that takes over the PCL’s structural role.

If the organ begins to form cells and tissue by itself, then the function of PCL will be replaced by the biological function of the organ that is growing.

The organ becomes real!

Bio-ink helps the 3D printed organ mimic its natural cellular environment and eventually become a real organ.

3D Printing Organs Will Save Lives

Every year, thousands of people need a lifesaving organ transplant. These transplants cost hundreds of thousands of dollars, and many people who need them don’t make it past the waiting list.

3D Printing organs could give people the incredible opportunity to receive the help they need when they need it, saving thousands of lives annually, and millions of lives in the long run.

As advances are made in 3D bioprinting, they will also be made in Organoid and Artificial Intelligence, showing once again that progress in one area shines its way into another.


IC INSPIRATION

If we can create better forms of bio-ink and produce fully functional organs using 3D printing, we will fundamentally change the entire health care system.

17 people die every single day waiting for an organ transplant, many of whom can’t afford the transplant in the first place.

The biggest hope for everyone affected by this is that organs can be produced when they are needed, ending the transplant shortage and saving millions of lives in the future.

We have seen from this story that personalized organs made from a patient’s own cells can stop the body’s rejection of the organ. This shows us that there will come a time when there is no need for immunosuppressant therapy.

Even more amazing, doctors already use 3D printing to practice a surgery before performing it, sharpening their skills and finding better surgical approaches.

Think about it… If you can’t use a real organ to practice on, then 3D organs are the next best thing.

The production of organs, the end of immunosuppressant therapy, and more efficient surgery will eventually drive down the price of transplants. 3D printing organs will not only save lives in the future, it will also improve the quality of those lives afterwards.

That is the sort of world we can create. It’s amazing to think of all the good that is being done right here, right now.



Sora AI is Every Content Creator’s Dream. It’s Almost Here!


OpenAI’s Sora

Sora is the Japanese word for sky, our blue expanse that is often associated with limitless dreams and possibilities.

You may have heard that OpenAI is also releasing an AI video generator called Sora AI. With its fantastical visuals and life-like video, it’s without a doubt one of the top 5 AI technologies of 2024.

OpenAI recently launched Sora’s first short video, “Air Head”, and if it proves anything, it’s that Sora is every content creator’s dream turned reality.

But if you’re not convinced, perhaps this video might help. Here’s a little game called “can you spot the AI video?”

How Can Sora AI Help Content Creators?

Video producers, filmmakers, animators, visual artists, and game developers all have one thing in common: they are always looking for the next big thing in creative expression. Sora AI is a tool that can greatly enhance content creators’ ability to fuel their imagination and connect with their audiences.

A common misconception is that AI is going to replace human artists, videographers, and animators. But if Sora’s first short film has shown anything, it’s that a team was still needed to create the story, narrate the scenes, and edit the videos into the final production.

Sora won’t replace artists; it will equip them with tools to express their artistry in different ways.

Sora’s First Short Film

Shy Kids, a Toronto-based multimedia company, is among the few granted early access to the AI video generator to test and refine it before launch. The video the artists generated using Sora AI is called “Air Head”.

Pretty mind-blowing to think that one day, we might be able to create an entire movie with the main character as a balloon. Think of the comedies we can create.

How Does Sora AI Work?

Sora’s first short film “Air Head” shows that Sora AI is the most advanced AI-powered video generator tool in history. Sora creates realistic and detailed 60-second videos of any topic, realistic or fantasy. It only needs a prompt from the user to build on existing information and develop whole new worlds.

What We Know So Far

Sora AI is a new technology with limited access. There’s a strategic reason to limit information about a new technology: it manages the public’s expectations while the final product is polished. Sora is a very powerful tool, and strong safeguards and guidelines may be necessary before releasing it. Here’s what we know so far.

Sora Release Date

OpenAI has not provided a specific release date for public availability, or even a waiting list. However, many sources indicate that it may be released in the second half of 2024. Currently, Sora AI is only available to testers called “red teamers”, and a select group of designers, like Shy Kids, has been granted access.

Sora Price

OpenAI has not yet released a price for Sora AI and has made no comment on whether there will be a free version like its other AI models. Based on other AI text-to-video generators, it’s likely that there won’t be a free version, and that Sora will offer a tiered subscription model catering to users who want to churn out videos regularly.

There is also a possibility of a credit-based system, similar to its competitor RunwayML. A credit-based system is where users purchase credits, and each credit is used for a specific task related to generating a video.

Sora’s Video Length

OpenAI has said Sora can generate videos of up to a minute long with visual consistency. Scientific American states that users will be able to increase the length of a video by adding additional clips to a sequence. Sora’s first short film “Air Head” ran for a minute and twenty seconds, which suggests that Sora’s video length can fall anywhere between 60 and 90 seconds.

Sora’s Video Generation Time

OpenAI has not revealed how long it will take Sora AI to generate a video; however, Sora will use NVIDIA H100 AI GPUs, which are designed to handle complex artificial intelligence workloads. According to estimates from Factorial Funds, these GPUs should allow OpenAI’s Sora to create a one-minute video in approximately twelve minutes.

How is Sora AI Different from Other Video Generators?

Many text-to-video generators have trouble maintaining visual coherence. They will often produce visuals that are completely different from one another in each scene, which means the videos need further editing. In some cases, it takes longer to create the video you want using AI than it does to create it yourself.

Sora AI seems to surpass other text-to-video generators in the level of detail and realism it creates. It has a deeper understanding of how the physical world operates.

It Brings Motion to Pictures

Another feature Sora AI offers is image prompts. Sora will be able to take a still photo, such as a portrait, and bring it to life by adding realistic movement and expression to the subject. This means you can generate an image using OpenAI’s DALL·E model, then prompt Sora with text describing what you would like the image to do.

This is like something out of Harry Potter. One of the biggest worries is that Sora AI will be able to depict someone saying or doing something they never did. I don’t think the world’s ready for another Elon Musk Deepfake.

Will Sora AI Undermine Our Trust In Videos?

There are over 700 AI-managed fake news sites across the world. OpenAI is already working with red teamers, experts in areas of false content, to help prevent Sora AI from being used in ways that undermine our trust in videos.

Detection classifiers will play a big role in the future of AI. Among them are tools that can detect AI in writing, and content credentials embedded in a file’s metadata that show whether an image was made using AI.

AI image generators like Adobe Firefly are already using content credentials for their images.

Why do Sora AI Videos Look So Good?

Sora AI generates its videos using ‘spacetime patches’: small segments of video that let Sora analyze complex visual information by capturing both appearance and movement together. This produces more realistic and dynamic video than other AI generators that rely on fixed-size inputs and outputs.
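OpenAI hasn’t published the implementation, but conceptually a spacetime patch is just a small block of video that spans a few frames as well as a small square of pixels, so each patch carries appearance and motion together. Here’s a minimal NumPy sketch of that idea; the function name and patch sizes are illustrative assumptions, not Sora’s actual parameters:

```python
import numpy as np

def to_spacetime_patches(video, t=4, p=16):
    """Chop a video of shape (frames, height, width, channels) into spacetime patches."""
    T, H, W, C = video.shape
    video = video[:T - T % t, :H - H % p, :W - W % p]     # trim so patches divide evenly
    T, H, W, C = video.shape
    patches = (video
               .reshape(T // t, t, H // p, p, W // p, p, C)  # carve into t x p x p blocks
               .transpose(0, 2, 4, 1, 3, 5, 6)               # group the block axes together
               .reshape(-1, t * p * p * C))                  # one flat row per patch
    return patches

fake_video = np.random.rand(16, 128, 128, 3)     # 16 frames of 128x128 RGB noise
print(to_spacetime_patches(fake_video).shape)    # (256, 3072): 256 patches of 4x16x16x3 values
```

The idea, roughly, is that a model can then treat those patch rows the way a language model treats words, which helps it keep appearance and motion consistent across a scene.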

One commenter said Sora AI videos are like dreams, only clearer… That’s not a bad way to put it. After all, dreams are like movies our brains create, as anyone who increases their REM sleep will understand. But speaking of movies, how will Sora AI affect Hollywood?

Can Sora AI Replace Movies?

As amazing as OpenAI’s text-to-video generator is, it can’t replace actors and use them in a prolonged storyline, but it can help producers create some fantastic movies. Sora AI can be used to create pre-visuals, concept art, and help producers scout potential locations.

Pre-visualization: Sora can turn scripts into visual concepts to help both directors and actors plan complex shots.

Concept Art Creation: It can be used to generate unique characters and fantastical landscapes which can then be incorporated into the design of movies.

Location Scouting: Using a prompt description, OpenAI’s Sora can expand on location options and even create locations that are not physically realizable. An example would be a city protruding from a planet floating around in space (I sense the next Dune movie here).

City protruding from a planet floating around in space.

IC INSPIRATION

Content creators have a story to tell, and fantastic content is often the product of a fantastic mind. Sora could transform how we share inspiring stories.

Just imagine for a moment how long it took to conceptualize the locations and characters needed to create a movie like The Lord of the Rings. How many sketches, paintings, and 3D models did they have to create before they got their “aha moment” and finally found the perfect look for the movie?

I wonder how much time Sora AI can save film and content creators, and with it, how much money. If it is truly as intuitive as it appears to be, then it could revolutionize the work of filmmakers, video creators, game developers, and even marketers.

A lot of campaigns are too hard to visualize. Take Colossal Biosciences as an example. They are a company that has created a de-extinction project to bring back the Woolly Mammoth. How on earth do you conceptualize the process of de-extinction in a campaign video without spending an enormous amount of money?

Sora could be just what the doctor ordered.
