
How AI in Space Exploration is Revolutionizing Space Travel


The Use of AI in Space Exploration

Space programs have long been criticized for being dangerous, expensive, and impractical, but the use of AI in space exploration is changing that. The latest deep learning algorithms are increasingly being integrated into our spacecraft, rovers, and radio telescopes to make space travel safer, more affordable, and ultimately more meaningful.

From autonomous floating droids assisting astronauts to scientists calculating vast interstellar star maps, it’s certainly exciting to live in a time where artificial intelligence and space exploration are no longer the stuff of fiction.

How is AI used in Space Travel?

Space travel is no simple task, but when AI is used in space travel, deep learning algorithms can measure atmospheric conditions more accurately and plot more efficient routes, reducing fuel expenditure. Many of the automated systems that keep our astronauts alive are beginning to benefit from advances in AI being used in space. The further AI is integrated into our spacecraft, the safer it will be for the craft to dock with the International Space Station and extend its landing gear on its return to our planet.

The CIMON 2

One new and exciting addition to the final frontier has been the CIMON 2 – an autonomous robot designed to assist astronauts both technically and emotionally. Think of it as a lopsided basketball-sized Amazon Alexa with a touchscreen interface on its flat side. CIMON 2 can propel itself through the zero gravity of the cabin using small fans and air tubes. 

CIMON 2 is programmed to analyze tone of voice to assess levels of stress, which is vital in a place as unpredictable as space. It also serves as a database of information in an otherwise isolated environment. Astronauts need to be entirely self-sustaining to survive. Having unlimited access to as much information as possible is the best way to ensure they can meet any challenges they may face up there, and this is where utilizing AI in space exploration can be life-saving.

The Subaru Telescope

The Subaru Telescope is a 26-foot telescope on the summit of Maunakea, a mountain on the island of Hawaii.

The telescope maps the observable universe, and scientists analyze its data with a new machine-learning technology called SWIMMY (Subaru WIde-field Machine-learning anoMalY).

This AI space technology is used for “anomaly detection”, the process of identifying rare events that deviate from expected patterns. These detections allow scientists to predict where pivotal cosmological events like supernovas are likely to occur.

Having the foresight to determine where these events will take place gives scientists the opportunity to develop the means of observing phenomena like black holes. With the application of AI, many are confident that we will begin to understand the life cycle of stars and our solar system like never before.

Subaru Telescope Observatory, Maunakea, Hawaii

How is AI Being Used to Identify Black Holes?

Some scientists are employing AI space technology to determine the likeliest locations of black holes. So little is known about these elusive phenomena that identifying where they are is the first step in unlocking their mysteries. Until that day comes, we have AI to thank for helping a team of Nobel Prize-winning scientists create the most accurate rendition of a black hole to date.

Currently, there is a contradiction between the known laws of gravity (the general theory of relativity) and the behavior of particles in theoretical physics (quantum mechanics). Studying black holes gives scientists the opportunity to resolve a century-long issue in theoretical physics, one that may explain the very nature of reality.

Space Rovers on Mars

Mars rovers are vehicles that explore the surface of Mars and send feedback to Earth via a high-gain antenna (HGA). They are equipped with cameras that allow engineers to drive them across the Martian surface. These rovers use AI to differentiate objects in the planet’s environment. Similar mechanisms are already being used here on Earth to capture pests in order to reduce the amount of pesticide used on crops.

Newer rovers are being designed to have legs that hop rather than traditional wheels, so scientists can analyze previously unexplored terrain like mountains. AI algorithms are needed to calculate these movements.

Artificial Intelligence in Space is Saving Money Through Space Rovers

Utilizing AI in space exploration also makes sense from a monetary perspective because it is one of the best ways to ensure the protection of such expensive assets as spacecraft and unmanned rovers. Building autonomous rovers that learn to steer away from hazardous features like craters is a good way to prevent these investments from getting lost or damaged. The more sophisticated and complex these remote explorers become, the more expensive and time-consuming they are to build, which makes the application of AI all the more important.

The Search for Life in the Galaxy Using AI

The homepage for the SETI Institute is devoted to a single question: where will we be when we find life beyond Earth?

SETI as we know it today has been in operation since the early 1980s, and to date, we have yet to find verifiable proof of a signal from another world.

Most of what SETI picks up is a mixture of natural and man-made radio signals. AI and machine learning can sift through this data much faster than would otherwise be possible. As AI becomes more refined, it can recognize what is natural and what is man-made, isolating any anomalous data for further study.

As one astronomer from Manchester put it, the possibility of a signal from another world is like a needle in the proverbial haystack, so they’re teaching AI to remove all the hay leaving only the needle behind. 
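
As a toy illustration of that hay-removal idea (and not SETI’s actual pipeline, which uses trained machine-learning models), a simple statistical filter can flag readings that sit far outside the typical background; the principle of isolating anomalies for further study is the same:

```python
# Toy anomaly filter: keep only readings that deviate strongly from
# the typical background level, discarding the "hay".
from statistics import median

def find_anomalies(readings, threshold=5.0):
    """Return readings more than `threshold` robust deviations from the median."""
    med = median(readings)
    # Median absolute deviation: a robust measure of the background spread.
    mad = median(abs(x - med) for x in readings) or 1e-9
    return [x for x in readings if abs(x - med) / mad > threshold]

background = [1.0, 1.1, 0.9, 1.05, 0.95, 1.2, 0.85, 42.0]
print(find_anomalies(background))  # the 42.0 spike is the "needle"
```

Real SETI classifiers are far more sophisticated, but the same idea applies: characterize what is ordinary, then surface whatever refuses to fit.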


IC Inspiration

Although the idea of AI and space exploration may sound like the stuff of science fiction, the applications of AI in space exploration are far from fantasy.

Deep learning algorithms are already teaching satellites to avoid one another, along with harmful debris. Using AI space technology to prevent such collisions is paramount to global security because so much of the global economy relies on satellites.

According to NASA, an object moving extremely fast, and observed to gain an unexplained boost in speed, entered our solar system in 2017. We received our first known interstellar visitor in the shape of a 300-foot-long cylindrical object. Scientists were quick to dub the strange object Oumuamua, which means “a messenger from afar arriving first.”

While professional and public opinions differ on the fundamental nature of the object, Oumuamua undeniably displayed some inexplicable characteristics on its voyage through our section of space. The learning algorithm developed by SETI in recent years was able to find new signals when sifting through old data.

Just what would we be able to discover if this new form of AI were applied to some of the original data on our interstellar visitor? We may find out soon enough!

Alex Nagel is a content writer and writing tutor with a degree in English literature. He combines his academic research skills with his training in critical thinking to provide valuable news insights.


Commercial Hypersonic Travel Can Have You Flying 13,000 Miles In 10 Minutes!


If engineers start up a hypersonic engine at the University of Central Florida (UCF) and you’re not around to hear it, does it make a sound?

Hypersonic travel means moving at least five times the speed of sound. A team of aerospace engineers at UCF has created the first stable hypersonic engine, and it could have you travelling across the world at 13,000 miles per hour!

Compared to the 575 mph a typical jet flies, commercial hypersonic travel is a first-class trade-off anybody would be willing to make.

In fact, a flight from Tampa, FL to California takes nearly 5 hours on a typical commercial jet; on a commercial hypersonic aircraft, it would take only about 10 minutes.
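
The back-of-the-envelope arithmetic is easy to check. Assuming a roughly 2,500-mile Tampa-to-California route (an approximation, and ignoring takeoff, landing, and routing, which is why real jet flights run closer to five hours):

```python
# Rough flight-time comparison at cruise speed.
distance_miles = 2_500      # approx. Tampa, FL -> California (assumed)
jet_mph = 575               # typical commercial jet
hypersonic_mph = 13_000     # roughly Mach 17

jet_hours = distance_miles / jet_mph
hyper_minutes = distance_miles / hypersonic_mph * 60

print(f"Jet: {jet_hours:.1f} hours, hypersonic: {hyper_minutes:.0f} minutes")
# -> Jet: 4.3 hours, hypersonic: 12 minutes
```

The hypersonic figure lands in the same ballpark as the ten minutes quoted above.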

So here’s the question: When can we expect commercial hypersonic air flights?

When we stop combusting engines and start detonating them! With a little background information, you’ll be shocked to know why.

Challenges and Limitations of Commercial Hypersonic Travel

The challenge with commercial hypersonic air travel is that maintaining stable combustion to keep an aircraft moving becomes difficult. The difficulty comes from both the combustion and the aerodynamics involved at such high speeds.

What Engineering Challenges Arise in Controlling and Stabilizing Hypersonic Aircraft at Such High Speeds?

Combustion is the process of burning fuel. It happens when fuel mixes with air, creating a reaction that releases energy in the form of heat. This mixture of air and fuel creates combustion, and combustion is what generates the thrust needed to move most vehicles.

But hypersonic vehicles are quite different. A combustion engine is not efficient enough for vehicles to achieve stable hypersonic speeds. For a hypersonic aircraft to fly commercially, a detonation engine is needed.

Detonation can propel vehicles to much higher speeds than combustion, so creating a detonation engine is important for commercial hypersonic air travel. Detonation engines were long thought impossible, not because you couldn’t build them, but because stabilizing them is difficult.

On one hand, detonation can greatly speed up a vehicle or aircraft; on the other hand, the power and speed it creates make stabilizing the engine even harder.

Combustion vs Detonation

How Do Aerodynamic Forces Impact the Design and Operation of Hypersonic Vehicles?

Aerodynamics relates to the motion of air around an object—in this case, an aircraft. As you can imagine, friction between an aircraft and the air it travels through generates a tremendous amount of heat. The faster the vehicle, the more heat created.

Commercial hypersonic vehicles must be able to manage the heat created at hypersonic speeds to keep from being damaged altogether.

Hypersonic aircraft do exist, but only in experimental forms, such as in military applications. NASA’s Hyper-X program developed some of these vehicles, including the X-43A, which handled hypersonic speeds of Mach 6.8 (6.8x faster than the speed of sound).

Mach Number Range | Name | Description
1.0 Mach | Sonic | Exactly the speed of sound.
1.2-5.0 Mach | Supersonic | Faster than the speed of sound, characterized by shock waves.
>5.0 Mach | Hypersonic | More than 5x the speed of sound, with extreme aerodynamic heating.
Description of Mach levels
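
The table’s bands can be sketched in a few lines of code. The cutoffs below are simplified (the transonic region around Mach 1 is glossed over), and the sea-level speed of sound is taken as roughly 767 mph:

```python
SPEED_OF_SOUND_MPH = 767  # approximate speed of sound at sea level

def regime(mach: float) -> str:
    """Classify a speed by Mach number (simplified bands from the table)."""
    if mach < 1.0:
        return "subsonic"
    if mach <= 5.0:
        return "supersonic"
    return "hypersonic"

# Mach 17 works out to roughly the 13,000 mph quoted in this article.
print(regime(6.8), 17 * SPEED_OF_SOUND_MPH)  # -> hypersonic 13039
```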

But vehicles for commercial hypersonic air travel are still a work in progress.

Engineers say that we will have these vehicles by 2050, but it may be even sooner than that. Here’s why.

Future Prospects and Developments in Hypersonic Travel

The world’s first stable hypersonic engine was created back in 2020 by a team of aerospace engineers at UCF, and they have continued to refine the technology since. This work is revolutionizing hypersonic technology in a way that had been thought impossible just a few years ago.

To create a stable engine for commercial hypersonic air travel, engineers first need an engine that can handle detonation; more than that, the engine must generate repeated detonations while keeping them under control.

This is because, in order to reach hypersonic speeds and then stay there, repeated detonations must continually thrust the vehicle forward.

The development at UCF did just that. They created a Rotating Detonation Engine (RDE) called the HyperReact.

What Technological Advancements are Driving the Development of Commercial Hypersonic Travel?

When combustion happens, a large amount of energy creates a high-pressure wave known as a shockwave. This compression produces the higher pressures and temperatures that ignite fuel injected into the air stream. This mixture of air and fuel creates combustion, and combustion is what generates the thrust needed for a vehicle’s movement.

Rotating Detonation Engines (RDEs) are quite different. The shockwave generated by the detonation is carried to the “test” section of the HyperReact, where the wave repeatedly triggers detonations faster than the speed of sound (picture Wile E. Coyote lighting up his rocket to catch up to the Road Runner).

Theoretically, this engine can allow for hypersonic air travel at speeds of up to Mach 17 (17x the speed of sound).

Schematic diagram of the experimental HyperReact prototype, University of Central Florida

Hypersonic technology, with the development of the Rotating Detonation Engine, will pave the way for commercial hypersonic air travel. But even before that happens, RDEs will be used for space launches and eventually space exploration.

NASA began testing 3D-printed Rotating Detonation Rocket Engines (RDREs) in 2024.

How Soon Can We Expect Commercial Hypersonic Travel to Become a Reality?

Since we now have the world’s first stable hypersonic engine, the world’s first commercial hypersonic flight won’t be far off. Professor Kareem Ahmed, UCF professor and team lead for the experimental HyperReact prototype, says it’s very likely we will have commercial hypersonic travel by 2050.

It’s important to note that hypersonic air flight has happened before, but only in experimental form. NASA’s X-43A aircraft reached speeds of nearly Mach 10. The difference is that the X-43A flew on a scramjet, not a Rotating Detonation Engine (RDE).

Scramjets are combustion engines also capable of hypersonic speeds, but they are less efficient than Rotating Detonation Engines (RDEs) because they rely on combustion, not continuous detonation.

This makes RDEs the better choice for commercial hypersonic travel, and it explains why NASA has been testing them for space launches.

One thing is certain:

We can shoot for the stars but that shot needs to be made here on Earth… If we can land on the moon, we’ll probably have commercial hypersonic travel soon.


IC INSPIRATION

The first successful aviation flight took place 27 years after the first patented aviation engine was created, and the first successful spaceflight happened 35 years after the first successful rocket launch.

If the world’s first stable hypersonic engine was created in 2020, how long after until we have the world’s first Mach 5+ commercial flight?

1876-1903: Nicolaus Otto developed the four-stroke combustion engine in 1876, which became the basis for the Wright brothers’ first-ever flight in 1903.
1926-1961: Robert H. Goddard’s first successful rocket launch in 1926 paved the way for the first human spaceflight by Yuri Gagarin in 1961.
2020-2050: The first stable RDE was created in 2020, and history is in the making!

Shout out to Professor Kareem Ahmed and his team at UCF. They’ve set the precedent for history in the making.

Imagine travelling overseas without the long flights and difficult hauls, or RDREs so capable that they reduce costs and increase the efficiency of space travel. When time seems to be moving fast, hypersonic speed is something I think everyone can get behind.

Would you like to know about some more amazing discoveries? Check out the largest ocean in the universe!


3D Printed Organs Save Woman’s Life and Accidentally Pave Way for Biology-Powered Artificial Intelligence



A Great Advancement for 3D Printed Organs

3D printing in hospitals is nothing new, but for the first time in history, a woman received a 3D printed windpipe that became fully functional without the need for immunosuppressants.

Immunosuppressants are used during organ transplants to keep the body from attacking an organ that it sees as foreign. This means that the organ the woman received was organic and personalized for her, as if she had had it her entire life.

This mind-blowing news shows that we are now closer than ever to being able to create full-scale, functional, and complicated 3D printed organs like a heart or lung.

But what about creating a brain?

3D Printing and Organoid Intelligence

Organoid Intelligence, or OI, is an emerging field of study focused on creating bio-computers by merging AI with clusters of real brain cells called organoids. Organoids are miniature, simplified versions of organs grown in a lab dish. They mimic some of the functions of fully grown organs, like brains. The idea behind OI is that by increasing the number of cells organoids contain, they may begin to function like fully grown brains and can then be used alongside computers to enhance Artificial Intelligence.

It turns out that the world’s first 3D printed windpipe was so successful that we are now closer than ever to creating the world’s first organoid-intelligent bio-computer.

Here’s why.

The World’s First 3D Printed Windpipe

Transplant patients usually have to take a long course of immunosuppressants to help the body accept the organ. The body sees the organ as foreign, and so the immune system begins to attack the new organ, which can lead to more complicated health problems.

The woman in her 50s who received the 3D printed windpipe did so without any immunosuppressants. Within just 6 months of the operation, the windpipe healed and began to form blood vessels and, of course, more cells.

The current goal of scientists in the field of Organoid Intelligence is to grow organoids from 100,000 cells to 10 million, and this raises the question:

Can 3D printing help build bio-computers by creating better organoids?

Can 3D Printing Help Build Bio-Computers?

The world’s first 3D printed windpipe shows that advances in 3D printing can create better-functioning organs, which implies that we can also create more intricate organoids to help the field of Organoid Intelligence and eventually create bio-computers.

It’s important to understand the distinction between 3D printing an organ and printing something like a tool or musical instrument.

The difference between printing an organ and printing a non-biological structure depends on the ink being used in the 3D printer.

3D printing non-organic structures requires ink made from plastic, plastic alternatives like PLA, metal, or ceramics. 3D printed organs, on the other hand, are made from inks called “bio-inks”: mixtures of living cells and biocompatible materials.

In the case of the 3D printed windpipe, the ink used was partly formed from the stem and cartilage cells collected from the woman’s own nose and ear. It was because of this bio-ink that the woman’s body did not reject the organ.

The Problem With 3D Printed Organs

Organs created with bioprinting need to function like real organs for the body to safely use them, and this does not happen right away.

3D printed organs need to go beyond a printed structure and become living. They need to form the tissues and cells that create biological functionality, and forming these cells takes time.

The problem with 3D bioprinting is that the ink used for the printer needs to be effective at doing this, and if it is not, the organ may not stay functional.

The ink used for the 3D-printed windpipe was made from part bio-ink and part polycaprolactone (PCL), a synthetic polyester material.

PCL is used in the 3D ink to maintain the structure of the windpipe, while the bio-ink helps the 3D printed organ become fully biological over time so that the body can use it.

The PCL maintains the structure while the bio-ink does its thing.

The problem with PCL is that it is biodegradable and won’t last forever. In fact, doctors don’t expect the 3D-printed windpipe to last more than five years.

The Solution is Better Bio-ink

The 3D printed windpipe was not just made using PCL, but it contained bio-ink made from living cells too. The hope is that the living cells in the 3D printed organ—which came from the bio-ink—will assist the patient’s body in creating a fully functional windpipe to replace the PCL’s function.

If the organ begins to form cells and tissue by itself, then the function of PCL will be replaced by the biological function of the organ that is growing.

The organ becomes real!

Bio-ink helps the 3D printed organ mimic its natural environment of cells and eventually become a real organ.

3D Printing Organs Will Save Lives

Every year, thousands of people need a lifesaving organ transplant. These transplants cost hundreds of thousands of dollars, and many people who need them don’t make it past the waiting list.

3D Printing organs could give people the incredible opportunity to receive the help they need when they need it, saving thousands of lives annually, and millions of lives in the long run.

As advances are made in 3D Bioprinting, they will also be made in areas of Organoid and Artificial Intelligence, which shows that the progress being made in one place will once again shine its way to another.


IC Inspiration:

If we can create better forms of bio-ink and produce fully functional organs using 3D printing, we will fundamentally change the entire health care system.

17 people die every single day waiting for an organ transplant, many of whom can’t afford the transplant in the first place.

The biggest hope for everyone affected by this is that organs can be produced when they are needed, ending the transplant shortage and saving millions of lives in the future.

We have seen from this story that personalized organs made from a patient’s own cells can stop the body’s rejection of organs. This shows us that there will come a time when there is no need for immunosuppressant therapy.

Even more amazing is that doctors use 3D printed models to practice a surgery beforehand, sharpening their skills and helping them find better pathways for performing the operation.

Think about it… If you can’t use a real organ to practice on, then 3D organs are the next best thing.

The production of organs, the irrelevancy of immunosuppressants, and more efficient surgery will eventually drive down the prices of transplants, and 3D printing organs in the future will not only save lives, but it will also increase the quality of those lives afterwards.

That is the sort of world we can create. It’s amazing to think of all the good that is being done right here, right now.


Sora AI is Every Content Creator’s Dream. It’s Almost Here!


OpenAI’s Sora

Sora is the Japanese word for sky, our blue expanse that is often associated with limitless dreams and possibilities.

You may have heard that OpenAI is also releasing an AI video generator called Sora AI. With its fantastical visuals and lifelike video, it’s without a doubt one of the top 5 AI technologies of 2024.

OpenAI recently launched Sora’s first short video, “Air Head”, and if it proves anything, it’s that Sora is every content creator’s dream turned reality.

But if you’re not convinced, perhaps this video might help. Here’s a little game called, “can you spot the AI video”?

How Can Sora AI Help Content Creators?

Video producers, filmmakers, animators, visual artists, and game developers all have one thing in common: they are always looking for the next big thing in creative expression. Sora AI is a tool that can greatly enhance content creators’ ability to fuel their imagination and connect with their audiences.

A misconception is that AI is going to replace human artists, videographers, and animators. But if Sora’s first short film has shown anything, it’s that a team was still needed to create the story, narrate the scenes, and edit the videos into the final production.

Sora won’t replace artists; it will equip them with tools to express their artistry in different ways.

Sora’s First Short Film

Shy Kids, a Toronto-based multimedia company of auteurs, was granted early access to Sora AI, among the few allowed to test and refine the AI video generator before launch. The video the artists generated using Sora AI is called “Air Head”.

Pretty mind-blowing to think that one day, we might be able to create an entire movie with the main character as a balloon. Think of the comedies we can create.

How Does Sora AI Work?

Sora’s first short film “Air Head” shows that Sora AI is the most advanced AI-powered video generator tool in history. Sora creates realistic and detailed 60-second videos of any topic, realistic or fantasy. It only needs a prompt from the user to build on existing information and develop whole new worlds.

What We Know So Far

Sora AI is a new technology with limited access. There’s a strategic reason to limit information about a new technology: it manages the public’s expectations while the final product is polished. Sora is a very powerful tool, so it may be necessary to build strong safeguards and guidelines before releasing it. Here’s what we know so far.

Sora Release Date

OpenAI has not provided any specific release date for public availability, or even a waiting list. However, many sources indicate that it may be released in the second half of 2024. Currently, Sora AI is only being made available to testers called “red teamers” and a select group of designers, like Shy Kids, who have been granted access.

Sora Price

OpenAI has not yet released a price for Sora AI and has made no comment on whether there will be a free version like its other AI models. Based on other AI text-to-video generators, it’s likely that there won’t be a free version, and that Sora will offer a tiered subscription model catering to users who want to produce videos regularly.

There is also a possibility of a credit-based system, similar to its competitor RunwayML. A credit-based system is where users purchase credits, and each credit is used for a specific task related to generating a video.

Sora’s Video Length

OpenAI has said Sora can generate videos of up to a minute long with visual consistency. Scientific American states that users will be able to increase the length of a video by adding additional clips to a sequence. Sora’s first short film “Air Head” ran for a minute and twenty seconds, which suggests that Sora’s video length can be anywhere between 60 and 90 seconds.

Sora’s Video Generation Time

OpenAI has not revealed how long it will take Sora AI to generate a video; however, Sora will use NVIDIA H100 AI GPUs, which are designed to handle complex artificial intelligence tasks. According to estimates from Factorial Funds, these GPUs will allow OpenAI’s Sora to create a one-minute video in approximately twelve minutes.

How is Sora AI Different from Other Video Generators?

Many text-to-video generators have trouble maintaining visual coherence. They often add visuals that are completely different from one another in each scene, which means the videos need further editing. In some cases, it takes longer to create the video you want using AI than it does to create it yourself.

Sora AI seems to surpass other text-to-video generators in the level of detail and realism it creates. It has a deeper understanding of how the physical world operates.

It Brings Motion to Pictures

Another feature of Sora AI is its still-photo prompts. Sora will be able to take a still photo, such as a portrait, and bring it to life by adding realistic movement and expression to the subject. This means that you can generate images using OpenAI’s DALL·E model, and then prompt Sora with text describing what you would like the image to do.

This is like something out of Harry Potter. One of the biggest worries is that Sora AI will be able to depict someone saying or doing something they never did. I don’t think the world’s ready for another Elon Musk Deepfake.

Will Sora AI Undermine Our Trust In Videos?

There are over 700 AI-managed fake news sites across the world. OpenAI is already working with red teamers—experts in areas of false content—to help prevent the use of Sora AI in a way that can undermine our trust in videos.

Detection classifiers will play a big role in the future of AI. Among these detection classifiers are tools that can detect AI in writing, and content credentials that record in an image’s metadata whether it was made using AI.

AI image generators like Adobe Firefly are already using content credentials for their images.

Why do Sora AI Videos Look So Good?

Sora AI generates its videos using “spacetime patches”. Spacetime patches are small segments of video that allow Sora to analyze complex visual information by capturing both appearance and movement in an effective way. This creates more realistic and dynamic video, as opposed to other AI video generators that have fixed-size inputs and outputs.
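
To make the idea concrete, here is a hypothetical sketch (not OpenAI’s actual implementation) of how a video volume can be divided into fixed-size spacetime patches along time, height, and width:

```python
# Illustrative only: enumerate the start coordinates of fixed-size
# "spacetime patches" tiling a video of shape (frames, height, width).
def spacetime_patches(frames, height, width, pt, ph, pw):
    """Return (t, y, x) start coordinates of each patch of size pt x ph x pw."""
    return [(t, y, x)
            for t in range(0, frames, pt)   # step through time
            for y in range(0, height, ph)   # step down rows
            for x in range(0, width, pw)]   # step across columns

# An 8-frame, 32x32 clip tiled into 4x8x8 patches yields 2*4*4 = 32 patches.
patches = spacetime_patches(frames=8, height=32, width=32, pt=4, ph=8, pw=8)
print(len(patches))  # -> 32
```

Each patch carries a small chunk of both appearance (the pixels) and motion (how they change across its frames), which is what lets a model reason about the two together.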

One comment said Sora AI videos are like dreams, only clearer… That’s not a bad way to put it. After all, dreams are like movies our brains create, and anyone who increases their REM sleep will understand. But speaking of movies, how will Sora AI affect Hollywood?

Can Sora AI Replace Movies?

As amazing as OpenAI’s text-to-video generator is, it can’t replace actors and use them in a prolonged storyline, but it can help producers create some fantastic movies. Sora AI can be used to create pre-visuals, concept art, and help producers scout potential locations.

Pre-visualization: Sora can turn scripts into visual concepts to help both directors and actors plan complex shots.

Concept Art Creation: It can be used to generate unique characters and fantastical landscapes which can then be incorporated into the design of movies.

Location Scouting: Using the prompt description, OpenAI’s Sora can expand on location options, and even create locations that are not physically realizable. An example would be a city protruding from a planet floating in space (I sense the next Dune movie here).

City protruding from a planet floating around in space.

IC INSPIRATION

Content creators have a story to tell, and fantastic content is often the product of a fantastic mind. Sora could transform how we share inspiring stories.

Just imagine for a moment how long it took to conceptualize the locations and characters needed to create a movie like The Lord of the Rings: how many sketches, paintings, and 3D models they had to create until they got their “aha moment” and finally found the perfect look for the movie.

I wonder how much time Sora AI could save film and content creators, and with it, how much money. If it is truly as intuitive as it appears to be, then it could revolutionize the work of filmmakers, video creators, game developers, and even marketers.

A lot of campaigns are too hard to visualize. Take Colossal Biosciences as an example. They are a company that has created a de-extinction project to bring back the Woolly Mammoth. How on earth do you conceptualize the process of de-extinction in a campaign video without spending an enormous amount of money?

Sora could be just what the doctor ordered.
