We’ve got some amazing neuroscience news for you! A new study of the brain has led scientists down a path to better understanding the early signs of Alzheimer’s disease and how to treat it effectively, and believe it or not, it all started with them studying our brain’s internal compass.
Have you ever found yourself exploring a new part of town and suddenly losing track of which way to go? It happens to the best of us. But fear not, because our brains have this amazing feature called the internal compass that helps us find our way, just like a magical guide showing us the right path.
The researchers wanted to gain a better understanding of how visual information affects our internal compass. As virtual reality technology gains more and more traction, this research can be extremely valuable in helping us understand the effects it may have on us.
In order to dive deeper into the effects of virtual experiences and particularly how they may make us feel disoriented, the scientists took mice on a virtual adventure. They exposed them to a special virtual world that made the mice feel a bit disoriented, and while the mice went exploring, the researchers tracked the activity in their brains.
What they discovered was a phenomenon they called “network gain.” Network gain is like a reset button that quickly helps us get back on track when we’re feeling confused. Imagine that! Our brains have a secret mechanism to reorient themselves and save the day in puzzling situations, eventually consolidating our sense of direction.
For some dedicated researchers, the virtual world isn’t just a game—it’s a scientific puzzle waiting to be solved.
The scientists are convinced that a better understanding of our internal compass and navigation system could lead to improved outcomes for individuals affected by Alzheimer’s disease, since its symptoms include feeling lost and disoriented.
Building on the research, scientists are now studying its significant implications for the disease, particularly how we can detect its early signs and develop effective treatments.
These incredible findings have sparked the curiosity of scientists, who are currently developing new models to dig deeper into how all these brain mechanisms work together. They are on a mission to help those with Alzheimer’s by continuing to unlock the secrets of our brain’s internal compass, as if working on a roadmap to a brighter future.
IC INSPIRATION
Have you ever pondered the incredible complexity of our brains? It’s truly amazing to consider that it might be the most intricate thing in the entire universe. No wonder humanity is endlessly fascinated by the quest to understand and unravel its mysteries.
In the United States alone, approximately 6.5 million people are currently grappling with Alzheimer’s disease, and worldwide, that number is estimated to be around 55 million. As technology advances, the study of the brain becomes more and more important. Just imagine the potential if we could find a way to effectively treat this devastating condition that currently lacks a cure.
Neuroscience has come a long way, thanks to amazing advancements in technology. Scientists like Mark Brandon and Zaki Ajabi from McGill University and Harvard University have been using cutting-edge tools to explore questions that were once unimaginable, giving us a sense of direction by studying our literal sense of direction.
It’s like they’re pushing the boundaries of what we thought was possible.
Thanks to the ongoing research of these people, there is hope that someday soon, mental illnesses will become relics of the past, much like life-long paralysis from nerve injuries may one day be. We may live in a future where these things no longer hold sway over our lives, leading us to a happier and more fulfilling existence.
The possibilities are truly awe-inspiring, and it is through dedicated scientific exploration that we inch closer to achieving this remarkable goal.
Carlos is a content developer with a background in communications and business management. He is experienced in journalistic research and writing, as well as content creation, such as video, audio, photography, and scripts.
Commercial Hypersonic Travel Can Have You Flying at 13,000 Miles Per Hour!
If engineers start up a hypersonic engine at the University of Central Florida (UCF) and you’re not around to hear it, does it make a sound?
Hypersonic travel means travelling at least five times the speed of sound. A team of aerospace engineers at UCF has created the first stable hypersonic engine, and it could have you travelling across the world at 13,000 miles per hour!
Compared to the 575 mph a typical jet flies at, commercial hypersonic travel is a first-class trade-off anybody would be willing to make.
In fact, a flight from Tampa, FL to California takes nearly 5 hours on a typical commercial jet, whereas a commercial hypersonic aircraft could make the trip in only about 10 minutes.
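As a quick sanity check on those figures, here is a back-of-the-envelope calculation (the roughly 2,400-mile Tampa-to-California distance is my assumption, not a figure from the engineers):

```python
# Back-of-the-envelope flight times; the ~2,400-mile distance is an assumption.
distance_miles = 2_400          # rough Tampa, FL to California distance
jet_speed_mph = 575             # typical commercial jet cruise speed
hypersonic_speed_mph = 13_000   # claimed hypersonic cruise speed

print(f"Jet:        {distance_miles / jet_speed_mph:.1f} hours")                 # ~4.2 hours
print(f"Hypersonic: {distance_miles / hypersonic_speed_mph * 60:.0f} minutes")   # ~11 minutes
```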
So here’s the question: When can we expect commercial hypersonic air flights?
When we stop combusting engines and start detonating them! With a little background information, you’ll be shocked to know why.
Challenges and Limitations of Commercial Hypersonic Travel
The challenge with commercial hypersonic air travel is that maintaining stable combustion to keep an aircraft moving becomes difficult. The difficulty comes from both the combustion and the aerodynamics that happen at such high speeds.
What Engineering Challenges Arise in Controlling and Stabilizing Hypersonic Aircraft at Such High Speeds?
Combustion is the process of burning fuel. It happens when fuel mixes with air, creating a reaction that releases energy in the form of heat. This mixture of air and fuel creates combustion, and combustion is what generates the thrust that moves most vehicles.
But hypersonic vehicles are quite different. A combustion engine is not efficient enough for a vehicle to achieve stable hypersonic speeds. For a hypersonic aircraft to fly commercially, a detonation engine is needed.
Detonation can thrust vehicles to much higher speeds than combustion, so creating a detonation engine is important for commercial hypersonic air travel. Detonation engines were long thought to be impossible, not because you couldn’t build them, but because stabilizing them is difficult.
On one hand, detonation can greatly speed up a vehicle or aircraft; on the other hand, the power and speed it creates make stabilizing the engine even harder.
How Do Aerodynamic Forces Impact the Design and Operation of Hypersonic Vehicles?
Aerodynamics relates to the motion of air around an object—in this case, an aircraft. As you can imagine, friction between an aircraft and the air it travels through generates a tremendous amount of heat. The faster the vehicle, the more heat created.
Commercial hypersonic vehicles must be able to manage the heat created at hypersonic speeds to avoid being damaged altogether.
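To get a feel for just how much heat, compressible-flow theory gives the stagnation temperature (how hot the air gets where it piles up against the airframe) as T0 = T × (1 + (γ − 1)/2 × M²). A minimal sketch, assuming idealized air and a typical high-altitude ambient temperature; real hypersonic flows involve effects this simple formula ignores:

```python
# Idealized stagnation-temperature estimate. Real hypersonic flows involve
# real-gas effects this formula ignores, so treat the numbers as rough.
GAMMA = 1.4        # ratio of specific heats for air
T_AMBIENT_K = 220  # assumed ambient temperature at high altitude, in kelvin

def stagnation_temperature(mach: float) -> float:
    """Temperature of air brought to rest against the vehicle's skin."""
    return T_AMBIENT_K * (1 + (GAMMA - 1) / 2 * mach ** 2)

for mach in (1, 5, 6.8):
    print(f"Mach {mach}: ~{stagnation_temperature(mach):,.0f} K")
# Mach 6.8 comes out above 2,200 K, hot enough to destroy ordinary airframes.
```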
Hypersonic aircraft do exist, but only in experimental forms, such as in military applications. NASA’s Hyper-X program developed some of these vehicles, one of which is the X-43A, which could handle hypersonic speeds of Mach 6.8 (6.8 times the speed of sound).
| Mach Number Range | Name | Description |
| --- | --- | --- |
| 1.0 | Sonic | Exactly the speed of sound. |
| 1.2-5.0 | Supersonic | Faster than the speed of sound, characterized by shock waves. |
| >5.0 | Hypersonic | More than 5x the speed of sound, with extreme aerodynamic heating. |

Description of Mach levels
But vehicles for commercial hypersonic air travel are still a work in progress.
Engineers say that we will have these vehicles by 2050, but it may be even sooner than that. Here’s why.
Future Prospects and Developments in Hypersonic Travel
The world’s first stable hypersonic engine was created back in 2020 by a team of aerospace engineers at UCF, and they have continued to refine the technology since. This work is revolutionizing hypersonic technology in a way that was thought impossible just a few years ago.
To create a stable engine for commercial hypersonic air travel, an engine must first be built that can handle detonation; not only that, it must keep producing detonations while keeping them under control.
This is because, in order to reach hypersonic speeds and then stay at that level, repeated detonations are needed to thrust the vehicle forward.
The development at UCF did just that. They created a Rotating Detonation Engine (RDE) called the HyperReact.
What Technological Advancements are Driving the Development of Commercial Hypersonic Travel?
In a detonation, combustion releases a large amount of energy that creates a high-pressure wave known as a shockwave. The compression behind the wave raises pressure and temperature as fuel is injected into the air stream, and the resulting air-fuel mixture combusts, generating the thrust needed for a vehicle’s movement.
Rotating Detonation Engines (RDEs) are quite different. The shockwave generated by the detonation is carried to the “test” section of the HyperReact, where the wave repeatedly triggers detonations faster than the speed of sound (picture Wile E. Coyote lighting up his rocket to catch up to the Road Runner).
Theoretically, this engine could allow for hypersonic air travel at speeds of up to Mach 17 (17x the speed of sound).
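To put those Mach numbers in everyday units, here is a rough conversion (assuming the sea-level speed of sound of about 767 mph; it drops with altitude, so these are ballpark figures):

```python
# Rough Mach-to-mph conversion using the sea-level speed of sound (~767 mph).
# The speed of sound falls with altitude, so treat these as ballpark values.
SPEED_OF_SOUND_MPH = 767

for mach in (1.0, 5.0, 6.8, 17.0):
    print(f"Mach {mach:>4}: ~{mach * SPEED_OF_SOUND_MPH:,.0f} mph")

# Mach 17 works out to roughly 13,000 mph, the figure quoted above.
```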
Hypersonic technology, with the development of the Rotating Detonation Engine, will pave the way for commercial hypersonic air travel. But even before that happens, RDEs will be used for space launches and eventually space exploration.
NASA has already begun testing 3D-printed Rotating Detonation Rocket Engines (RDREs) in 2024.
How Soon Can We Expect Commercial Hypersonic Travel to Become a Reality?
Since we now have the world’s first stable hypersonic engine, the world’s first commercial hypersonic flight won’t be far off. Kareem Ahmed, the UCF professor who leads the experimental HyperReact prototype team, says it’s very likely we will have commercial hypersonic travel by 2050.
It’s important to note that hypersonic air flight has happened before, but only in experimental form. NASA’s X-43A aircraft flew at nearly Mach 10. The difference is that the X-43A flew on scramjets, not Rotating Detonation Engines (RDEs).
Scramjets are combustion engines that are also capable of hypersonic speeds, but they are less efficient than RDEs because they rely on combustion, not continuous detonation.
This makes RDEs the better choice for commercial hypersonic travel, and it explains why NASA has been testing them for space launches.
One thing is certain:
We can shoot for the stars, but that shot needs to be made here on Earth… If we can land on the Moon, we’ll probably have commercial hypersonic travel soon.
IC INSPIRATION
The first successful aviation flight took place 27 years after the first patented aviation engine was created, and the first successful human spaceflight happened 35 years after the first successful rocket launch.
If the world’s first stable hypersonic engine was created in 2020, how long after until we have the world’s first Mach 5+ commercial flight?
1876-1903: Nicolaus Otto developed the four-stroke combustion engine in 1876, which became the basis for the Wright brothers’ first powered flight in 1903.
1926-1961: Robert H. Goddard’s first successful rocket launch in 1926 paved the way for the first human spaceflight by Yuri Gagarin in 1961.
2020-2050: The first stable RDE was created in 2020, and history is in the making!
Shout out to Professor Kareem Ahmed and his team at UCF. They’ve set the precedent for history in the making.
Imagine travelling overseas without the long flight and difficult hauls, or RDREs so good that they reduce costs and increase the efficiency of space travel. When time seems to be moving fast, hypersonic speed is something I think everyone can get behind.
3D printing in hospitals is nothing new, but for the first time in history, a woman received a 3D printed windpipe that became fully functional without the need for immunosuppressants.
Immunosuppressants are used during organ transplants to keep the body from attacking an organ that it sees as foreign. This means that the organ the woman received was organic and personalized for her, as if she had had it her entire life.
This mind-blowing news shows that we are now closer than ever to being able to create full-scale, functional, and complicated 3D printed organs like a heart or lung.
But what about creating a brain?
3D Printing and Organoid Intelligence
Organoid Intelligence, or OI, is an emerging field of study focused on creating bio-computers by merging AI with clusters of real brain cells called organoids. Organoids are miniature, simplified versions of organs grown in a lab dish. They mimic some of the functions of fully grown organs, like brains. The idea behind OI is that by increasing the number of cells organoids contain, they may begin to function like fully grown brains and can then be used alongside computers to enhance Artificial Intelligence.
It turns out that the world’s first 3D printed windpipe was so successful that we are now closer than ever to creating the world’s first organoid-intelligence bio-computer.
Here’s why.
The World’s First 3D Printed Windpipe
Transplant patients usually have to take a long course of immunosuppressants to help the body accept the organ. The body sees the organ as foreign, so the immune system begins to attack it, which can lead to more complicated health problems.
The woman in her 50s who received the 3D printed windpipe did so without any immunosuppressants. Within just 6 months of the operation, the windpipe healed and began to form blood vessels and, of course, more cells.
The current goal of scientists in the field of Organoid Intelligence is to scale organoids up from 100,000 cells to 10 million, and this raises the question:
Can 3D printing help build bio-computers by creating better organoids?
Can 3D Printing Help Build Bio-Computers?
The world’s first 3D printed windpipe shows that advances in 3D printing can create better-functioning organs, which implies that we can also create more intricate organoids to help the field of Organoid Intelligence and eventually create bio-computers.
It’s important to understand the distinction between 3D printing an organ and printing something like a tool or musical instrument.
The difference between printing an organ and printing a non-biological structure comes down to the ink used in the 3D printer.
3D printing non-organic structures requires ink made from materials such as plastic, plastic alternatives like PLA, metal, or ceramics. 3D printed organs, on the other hand, are made from “bio-inks”: mixtures of living cells and biocompatible substances.
In the case of the 3D printed windpipe, the ink used was partly formed from the stem and cartilage cells collected from the woman’s own nose and ear. It was because of this bio-ink that the woman’s body did not reject the organ.
The Problem With 3D Printed Organs
Organs created with bioprinting need to function like real organs for the body to safely use them, and this does not happen right away.
The 3D printed organs need to go beyond just a printed structure and become living. They need to form the tissues and cells that create biological functionality, and forming these cells takes time.
The problem with 3D bioprinting is that the ink used in the printer needs to be effective at supporting this process; if it is not, the organ may not stay functional.
The ink used for the 3D-printed windpipe was made from part bio-ink and part polycaprolactone (PCL), a synthetic polyester material.
PCL is used in the 3D ink to maintain the structure of the windpipe, while the bio-ink helps the 3D printed organ become fully biological over time so that the body can use it.
The PCL maintains the structure while the bio-ink does its thing.
The problem with PCL is that it is biodegradable and won’t last forever. In fact, doctors don’t expect the 3D-printed windpipe to last more than five years.
The Solution is Better Bio-ink
The 3D printed windpipe was not just made using PCL, but it contained bio-ink made from living cells too. The hope is that the living cells in the 3D printed organ—which came from the bio-ink—will assist the patient’s body in creating a fully functional windpipe to replace the PCL’s function.
If the organ begins to form cells and tissue by itself, then the function of PCL will be replaced by the biological function of the organ that is growing.
The organ becomes real!
Bio-ink helps the 3D printed organ mimic its natural environment of cells and eventually become a real organ.
3D Printing Organs Will Save Lives
Every year, thousands of people need a lifesaving organ transplant. These transplants cost hundreds of thousands of dollars, and many people who need them don’t make it past the waiting list.
3D Printing organs could give people the incredible opportunity to receive the help they need when they need it, saving thousands of lives annually, and millions of lives in the long run.
As advances are made in 3D bioprinting, they will also be made in Organoid and Artificial Intelligence, showing once again that progress made in one field lights the way for another.
IC Inspiration:
If we can create better forms of bio-ink and produce fully functional organs using 3D printing, we will fundamentally change the entire health care system.
In the United States, 17 people die every single day waiting for an organ transplant, many of whom can’t afford the transplant in the first place.
The biggest hope for everyone affected by this is that organs could be produced when they are needed, ending the transplant shortage and saving millions of lives in the future.
We have seen from this story that personalized organs made from a patient’s own cells can stop the body’s rejection of organs. This suggests that there will come a time when there is no need for immunosuppressant therapy.
Even more amazing is that doctors use 3D printed models to practice a surgery and sharpen their skills beforehand. This also helps them find better pathways for performing the operation.
Think about it… If you can’t use a real organ to practice on, then 3D organs are the next best thing.
The production of organs, the irrelevance of immunosuppressants, and more efficient surgery will eventually drive down the price of transplants. 3D printing organs will not only save lives in the future, but it will also improve the quality of those lives afterwards.
That is the sort of world we can create. It’s amazing to think of all the good that is being done right here, right now.
Sora is the Japanese word for sky, our blue expanse that is often associated with limitless dreams and possibilities.
You may have heard that OpenAI is also releasing an AI video generator called Sora AI. With its fantastical visuals and life-like video, it is without a doubt one of the top 5 AI technologies of 2024.
OpenAI recently launched Sora’s first short video, “Air Head”, and if it proves anything, it’s that Sora is every content creator’s dream turned reality.
But if you’re not convinced, perhaps this video might help. Here’s a little game called “Can you spot the AI video?”
How Can Sora AI Help Content Creators?
Video producers, filmmakers, animators, visual artists, and game developers all have one thing in common: they are always looking for the next big thing in creative expression. Sora AI is a tool that can greatly enhance content creators’ ability to fuel their imagination and connect with their audiences.
A common misconception is that AI is going to replace human artists, videographers, and animators. But if Sora’s first short film has shown anything, it’s that a team was still needed to create the story, narrate the scenes, and edit the videos into the final production.
Sora won’t replace artists; it will equip them with tools to express their artistry in different ways.
Sora’s First Short Film
Shy Kids, an auteur, Toronto-based multimedia company, is among the few granted early access to the AI video generator to test and refine it before launch. The video the artists generated using Sora AI is called “Air Head”.
Pretty mind-blowing to think that one day, we might be able to create an entire movie with the main character as a balloon. Think of the comedies we can create.
How Does Sora AI Work?
Sora’s first short film “Air Head” shows that Sora AI is the most advanced AI-powered video generator tool in history. Sora creates realistic and detailed 60-second videos of any topic, realistic or fantasy. It only needs a prompt from the user to build on existing information and develop whole new worlds.
What We Know So Far
Sora AI is a new technology with limited access. There’s a strategic reason to limit information about a new technology: it manages the public’s expectations while the final product is polished. Sora is a very powerful tool, and strong safeguards and guidelines may be necessary before releasing it. Here’s what we know so far.
Sora Release Date
OpenAI has not provided a specific release date for public availability, or even a waiting list. However, many sources indicate that it may be released in the second half of 2024. Currently, Sora AI is only available to testers called “red teamers”, and a select group of designers, like Shy Kids, has been granted access.
Sora Price
OpenAI has not yet released a price for Sora AI and has made no comment on whether there will be a free version, as there is for its other AI models. Based on other AI text-to-video generators, it’s likely that there won’t be a free version and that Sora will offer a tiered subscription model catering to users who want to produce videos regularly.
There is also a possibility of a credit-based system, similar to its competitor RunwayML. A credit-based system is where users purchase credits, and each credit is used for a specific task related to generating a video.
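As a rough sketch of how such a credit model typically works (the task names and credit costs below are hypothetical illustrations, not OpenAI’s or RunwayML’s actual pricing):

```python
# Hypothetical credit-based billing sketch. Task names and costs are invented
# for illustration; they are not OpenAI's or RunwayML's actual pricing.
CREDIT_COSTS = {
    "generate_60s_video": 100,
    "extend_clip": 40,
    "animate_still_image": 25,
}

class CreditAccount:
    def __init__(self, credits: int):
        self.credits = credits

    def spend(self, task: str) -> bool:
        """Deduct the task's cost if the balance covers it."""
        cost = CREDIT_COSTS[task]
        if self.credits < cost:
            return False  # not enough credits: the user must buy more
        self.credits -= cost
        return True

account = CreditAccount(credits=150)
print(account.spend("generate_60s_video"), account.credits)   # True 50
print(account.spend("extend_clip"), account.credits)          # True 10
print(account.spend("animate_still_image"), account.credits)  # False 10
```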
Sora’s Video Length
OpenAI has said Sora can generate videos of up to a minute long with visual consistency. Scientific American reports that users will be able to extend a video by adding additional clips to a sequence. Sora’s first short film “Air Head” ran for a minute and twenty seconds, which suggests that Sora’s video length can fall anywhere between 60 and 90 seconds.
Sora’s Video Generation Time
OpenAI has not revealed how long it will take Sora AI to generate a video; however, Sora will use NVIDIA H100 AI GPUs, which are designed to handle complex artificial intelligence tasks. According to estimates from Factorial Funds, these GPUs will allow OpenAI’s Sora to create a one-minute video in approximately twelve minutes.
How is Sora AI Different from Other Video Generators?
Many text-to-video generators have trouble maintaining visual coherency. They will often add visuals that are completely different from one another in each scene, which means the videos have to be edited further. In some cases, it takes longer to create the video you want using AI than it does to create it yourself.
Sora AI seems to surpass other text-to-video generators in the level of detail and realism it creates. It has a deeper understanding of how the physical world operates.
It Brings Motion to Pictures
Another feature of Sora AI is its still-photo prompts. Sora will be able to take a still photo, such as a portrait, and bring it to life by adding realistic movement and expression to the subject. This means that you can generate an image using OpenAI’s DALL·E model and then prompt Sora with text describing what you would like the image to do.
This is like something out of Harry Potter. One of the biggest worries is that Sora AI will be able to depict someone saying or doing something they never did. I don’t think the world’s ready for another Elon Musk deepfake.
Will Sora AI Undermine Our Trust In Videos?
There are over 700 AI-managed fake news sites across the world. OpenAI is already working with red teamers, experts in areas of false content, to help prevent Sora AI from being used in ways that could undermine our trust in videos.
Detection classifiers will play a big role in the future of AI. Among these are tools that can detect AI-generated writing, and content credentials embedded in a file’s metadata that show whether an image was made using AI.
AI image generators like Adobe Firefly are already using content credentials for their images.
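In simplified form, a content-credential check boils down to reading a provenance flag from a file’s metadata. A minimal sketch follows; real Content Credentials follow the C2PA standard with cryptographically signed manifests, and the field names here are illustrative, not an actual API:

```python
# Simplified, hypothetical content-credential check. Real Content Credentials
# follow the C2PA standard and are cryptographically signed; this sketch only
# shows the idea of an AI-provenance flag stored in metadata.
def was_ai_generated(metadata: dict) -> bool:
    credentials = metadata.get("content_credentials", {})
    # "trainedAlgorithmicMedia" is the IPTC term for AI-generated media.
    return credentials.get("digital_source_type") == "trainedAlgorithmicMedia"

image_metadata = {
    "content_credentials": {
        "issuer": "Adobe Firefly",                        # hypothetical example
        "digital_source_type": "trainedAlgorithmicMedia",
    }
}
print(was_ai_generated(image_metadata))  # True
```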
Why do Sora AI Videos Look So Good?
Sora AI generates its videos using “spacetime patches”. Spacetime patches are small segments of video that allow Sora to analyze complex visual information by capturing both appearance and movement effectively. This creates more realistic and dynamic video than other AI video generators, which have fixed-size inputs and outputs.
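OpenAI has not published Sora’s implementation, but the general idea of carving a video into spacetime patches can be sketched in a few lines of Python; the patch sizes here are arbitrary choices for illustration:

```python
import numpy as np

# Illustrative sketch of "spacetime patches" (not OpenAI's actual code).
# A video is a 4D array: (frames, height, width, color channels).
video = np.random.rand(16, 64, 64, 3)  # 16 frames of 64x64 RGB video

pt, ph, pw = 4, 16, 16                 # patch size in time, height, width
T, H, W, C = video.shape

# Carve the video into non-overlapping blocks spanning both space and time,
# then flatten each block into one token-like vector a model can process.
patches = (
    video.reshape(T // pt, pt, H // ph, ph, W // pw, pw, C)
         .transpose(0, 2, 4, 1, 3, 5, 6)
         .reshape(-1, pt * ph * pw * C)
)

print(patches.shape)  # (64, 3072): 64 spacetime patches, each a flat vector
```

Because each patch spans several frames, it captures movement as well as appearance, which is the property described above.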
One comment said Sora AI videos are like dreams, only clearer… That’s not a bad way to put it. After all, dreams are like movies our brains create, as anyone who increases their REM sleep will understand. But speaking of movies, how will Sora AI affect Hollywood?
Can Sora AI Replace Movies?
As amazing as OpenAI’s text-to-video generator is, it can’t replace actors or carry them through a prolonged storyline, but it can help producers create some fantastic movies. Sora AI can be used to create pre-visuals and concept art, and to help producers scout potential locations.
Pre-visualization: Sora can turn scripts into visual concepts to help both directors and actors plan complex shots.
Concept Art Creation: It can be used to generate unique characters and fantastical landscapes which can then be incorporated into the design of movies.
Location Scouting: Using the prompt description, OpenAI’s Sora can expand on location options and even create locations that are not physically realizable. An example would be a city protruding from a planet floating in space (I sense the next Dune movie here).
IC INSPIRATION
Content creators have a story to tell, and fantastic content is often the product of a fantastic mind. Sora could transform how we share inspiring stories.
Just imagine for a moment how long it took to conceptualize the locations and characters needed to create a movie like The Lord of the Rings: how many sketches, paintings, and 3D models they had to create until they got their “aha moment” and finally found the perfect look for the film.
I wonder how much time Sora AI can save film and content creators, and with it, how much money. If it is truly as intuitive as it appears to be, then it could revolutionize the work of filmmakers, video creators, game developers, and even marketers.
A lot of campaigns are hard to visualize. Take Colossal Biosciences as an example: a company with a de-extinction project to bring back the Woolly Mammoth. How on earth do you conceptualize the process of de-extinction in a campaign video without spending an enormous amount of money?