You Can Now Use AI to Read Ancient Texts and Win $1 Million
AI Reads Ancient Texts
AI is reading ancient texts, opening up an inspirational new world of lost and hidden knowledge. There are volumes of unknown history, philosophy, science, and literature out there waiting for us to explore. But that can be nearly impossible to do when that knowledge is written on fragmented stone tablets or badly damaged scrolls and papyrus.
AI is taking on this problem and making amazing strides toward solving it, and there is currently a million-dollar cash prize for anyone who can help decipher these ancient texts using AI.
How is AI Decoding Ancient Texts?
Three innovations have played a large role in how AI decodes ancient texts and unveils hidden information. They are multispectral imagery, virtual unwrapping, and an exciting new application that scientists have called the AI Historian.
- Multispectral Imagery: This technique filters an image to highlight specific wavelengths of light. When a document written on delicate animal skin was scraped clean and reused, the older and newer inks often respond differently at certain wavelengths. Multispectral imagery makes it possible to virtually strip away one layer of text and read what was written underneath it (a minimal sketch of the idea follows this list).
- Virtual Unwrapping: This approach involves taking CT scans of ancient texts. X-ray images captured from all sides of a scroll let software build a digital 3D model of it, which can then be virtually flattened to show the text as it would appear unrolled. This is especially critical for scrolls and other documents that can’t be opened without severe damage.
- The AI Historian: One of the most exciting releases is the AI Historian known as Ithaca. This online application allows scientists to enter ancient text directly into a website for analysis. The algorithms will examine the text and offer suggestions for missing sections of text, probabilities for the accuracy of those suggestions, and even predictions as to where and when the ancient text may have been written.
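To make the multispectral idea a little more concrete, here is a minimal, purely illustrative Python sketch. The band index, threshold, and array shapes are assumptions, not details from any real imaging pipeline; the point is simply that a scan can be treated as a stack of wavelength bands, and the dark “ink” pixels in a single band can be pulled out so one layer of writing is viewed on its own.

```python
import numpy as np

def isolate_band(image_stack: np.ndarray, band: int, threshold: float = 0.5) -> np.ndarray:
    """Return a binary mask of likely text pixels in one spectral band.

    image_stack -- array of shape (bands, height, width), one slice per wavelength.
    band        -- index of the wavelength at which the hidden ink shows up best.
    """
    layer = image_stack[band].astype(float)
    # Rescale the band to 0-1 so faint ink still stands out against the parchment.
    spread = layer.max() - layer.min()
    layer = (layer - layer.min()) / (spread + 1e-9)
    # Dark pixels (low reflectance) are treated as candidate ink.
    return layer < threshold

# Toy example: a stack of 8 wavelength bands for a 256x256 scan.
scan = np.random.rand(8, 256, 256)
undertext_mask = isolate_band(scan, band=5)
print(f"Candidate ink pixels: {undertext_mask.sum()}")
```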
Despite all these advances, some ancient writings have remained elusive. Among them are the En-Gedi scroll and the Herculaneum scrolls, which proved nearly impossible to decipher before the advent of AI.
What are the Herculaneum Scrolls?
The Herculaneum scrolls are texts that had been buried in Herculaneum, an ancient Roman city near modern-day Naples, Italy. The same volcanic eruption that covered the famous city of Pompeii in 79 AD also buried Herculaneum. The volcano responsible, Mount Vesuvius, is still active today.
Where Were the Herculaneum Scrolls found?
The Herculaneum scrolls were found under the remains of a luxurious villa, thought to have been the home of Julius Caesar’s father-in-law.
The scrolls lay underneath nearly 20 feet of volcanic ash and hardened mud until the villa was accidentally discovered by a farmer digging a well in 1750. In a neat irony, this centuries-long encasement both destroyed the scrolls and saved them at the same time.
The heat of the volcanic ash and the pressure of burial left the scrolls charred, carbonized, and badly compressed. Early attempts to unroll and read them destroyed many of the scrolls. But had they not been buried, they likely would have disintegrated long ago. Although the surviving scrolls can’t be opened, the writing inside them still exists, making AI deciphering methods extremely important.
The En-Gedi Scroll
Dr. Brent Seales, a computer scientist at the University of Kentucky, has spent many years using the virtual unwrapping method to try to retrieve information hidden inside carbonized scrolls. He and his team had significant success with a scroll found in the remnants of an ancient synagogue in En-Gedi, Israel.
Like the scrolls at Herculaneum, it was charred and compressed from its long burial and could not be unrolled without severe damage. Using Dr. Seales’s methods, his team was able to determine that the scroll was one of the oldest known copies of the book of Leviticus from the biblical Old Testament.
What is the Difference Between the Herculaneum Scrolls and the En-Gedi Scrolls?
The Herculaneum scrolls proved harder to decipher because the ink used to write them was carbon-based, essentially charcoal and water. That made the text nearly impossible to read on a papyrus that, with age and damage, had become the same color as the ink.
On the other hand, the En-Gedi scroll was written with lead-based ink that had a high metal content. This composition made the letters shine brightly in the CT scan images.
Dr. Brent Seales and his team realized that even slight differences in surface texture could reveal where ink sits on the papyrus. They called this pattern ‘crackle’.
Now, all they needed was to determine how to use these texture changes to read the words that were formed.
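One way to picture the idea, as a rough sketch rather than the team’s actual pipeline: measure how much the CT intensity varies in a small window around each pixel, and flag the most “textured” spots as candidate ink. The window size and cutoff below are arbitrary, illustrative choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def crackle_map(ct_slice: np.ndarray, window: int = 7) -> np.ndarray:
    """Highlight regions of a flattened CT slice whose local texture varies sharply.

    Simplified idea: ink sits on top of the papyrus fibres, so the surface
    'crackles' slightly where letters were written. Local variance is one
    crude way to surface that difference.
    """
    x = ct_slice.astype(float)
    mean = uniform_filter(x, size=window)
    mean_sq = uniform_filter(x ** 2, size=window)
    local_var = mean_sq - mean ** 2
    # Flag the most textured 5% of pixels as candidate ink (arbitrary cutoff).
    return local_var > np.percentile(local_var, 95)

slice_2d = np.random.rand(512, 512)     # stand-in for a flattened CT slice
candidates = crackle_map(slice_2d)
print(f"{candidates.mean():.1%} of pixels flagged as possible ink")
```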
The Vesuvius Challenge: Win 1 Million Dollars!
Dr. Seales and his team realized they would have a better chance of recovering all that ancient knowledge faster if they brought other minds and ideas to the table. So, they developed the Vesuvius Challenge.
This worldwide contest is open to anyone who wants to take a shot at the problem. It is organized into phases, with cash prizes awarded for successive milestones, including the first person to successfully identify ink and the first to identify letters. The team also released its existing scans and code to the contestants.
It wasn’t long before nearly 1,500 contestants around the world were hard at work on the problem. Soon, one of them was reading the first word on one of the scrolls: purple.
The ‘Purple’ Breakthrough
Luke Farritor was the first to develop a ground-breaking machine-learning algorithm that could scan the scrolls and identify Greek letters.
When the 21-year-old computer science student from the University of Nebraska-Lincoln ran his algorithm over a piece of scroll image with some very clear crackle, it took only an hour to successfully identify five Greek letters.
It was only a few days before refinements to his model allowed the word to be clearly identified as ‘purple’.
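The article doesn’t describe Farritor’s model in detail, but the general recipe behind this kind of breakthrough, training a classifier to label tiny patches of the flattened scans as “ink” or “no ink” and then assembling the predictions into letter shapes, can be sketched in a few lines of PyTorch. The patch size and layer sizes below are assumptions for illustration only, not his published architecture.

```python
import torch
from torch import nn

# A deliberately tiny convolutional classifier for 64x64 grayscale patches
# cut from the flattened scroll scans: two classes, "ink" vs. "no ink".
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),   # logits for: ink absent / ink present
)

patches = torch.rand(8, 1, 64, 64)       # a batch of stand-in patches
logits = model(patches)
ink_probability = logits.softmax(dim=1)[:, 1]
print(ink_probability)                   # one "ink" probability per patch
```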
Now that Luke has taken the Vesuvius Challenge’s ‘first letter’ prize of $40,000 for reading ten characters, the race is almost won. The only prize left is the $700,000 grand prize, which will go to the first contestant to read four or more passages from a closed scroll. The deadline for that award is December 31, 2023.
Of course, the real prize will be the ability to read all those ancient texts. And there may be much more to come than we realize. The ancient city of Herculaneum is still being excavated and experts have predicted there could be thousands of scrolls still waiting to be discovered.
Dr. Seales says the mood of the team is “unbridled optimism”. It’s only a matter of time before they’re reading entire scrolls, and unlocking the long-hidden wisdom and knowledge of our ancestors.
IC Inspiration
AI and computer technology have been taking on everything from treating Alzheimer’s Disease and helping injured people walk again, to helping restore coral reefs. So, it’s only natural that it should be used in the unending quest to read ancient texts.
In ancient times, the material used for writing was often calf, lamb, or kid skin. This parchment was both expensive and scarce, so it was common practice to wash it clean and use it again for new writings. These reused documents, called palimpsests, often hide the most amazing secrets.
In 2012, Dr. Peter Williams of the University of Cambridge assigned his students to examine images of the Codex Climaci Rescriptus, a collection of parchments from an Egyptian monastery. While reviewing a scanned image of the codex, one student pointed out that Greek lettering was faintly visible beneath the Syriac text on top.
The document was sent to the French National Centre for Scientific Research where multispectral imaging revealed what may well be the world’s oldest sky map.
Around 2,200 years ago, an astronomer named Hipparchus took on the monumental task of creating the earliest known star catalogue, documenting the positions of the stars visible across the night sky.
Unfortunately, most of his work, including the catalogue, has disappeared, leaving scholars to wonder whether the star catalogue had ever existed.
The multispectral scan of the codex revealed a copy of a poem by the Greek poet Aratus, accompanied by measurements that appear to come directly from Hipparchus’ original star catalogue.
It’s the evidence the scientific community needed to be confident, once and for all, that Hipparchus really did the remarkable work he’s credited with, even if most of it remains lost.
On the other hand, the Vesuvius Challenge has given us hope that the star catalogue could still show up again, someday.
Maybe even in the depths of the incredible libraries of Herculaneum.
Joy L. Magnusson is an experienced freelance writer with a special passion for nature and the environment, topics she writes about widely. Her work has been featured in Our Canada Magazine, Zooanthology, Written Tales Chapbook, and more.
Sora AI is Every Content Creator’s Dream. It’s Almost Here!
Published 29 March 2024
OpenAI’s Sora
Sora is the Japanese word for sky, our blue expanse that is often associated with limitless dreams and possibilities.
You may have heard that OpenAI is also releasing an AI video generator called Sora. With its fantastical visuals and lifelike motion, it is without a doubt one of the top AI technologies of 2024.
OpenAI recently launched Sora’s first short video, “Air Head”, and if it proves anything, it’s that Sora is every content creator’s dream turned reality.
But if you’re not convinced, perhaps this video will help. Here’s a little game: can you spot the AI video?
How Can Sora AI Help Content Creators?
Video producers, filmmakers, animators, visual artists, and game developers all have one thing in common: they are always looking for the next big thing in creative expression. Sora AI is a tool that can greatly enhance content creators’ ability to fuel their imagination and connect with their audiences.
A common misconception is that AI is going to replace human artists, videographers, and animators. But if Sora’s first short film has shown anything, it’s that a team was still needed to create the story, narrate the scenes, and edit the clips into the final production.
Sora won’t replace artists; it will equip them with tools to express their artistry in different ways.
Sora’s First Short Film
Shy Kids, a Toronto-based multimedia company, is among the few creators granted early access to the AI video generator to help test and refine it before launch. The video the artists generated using Sora AI is called “Air Head”.
Pretty mind-blowing to think that one day, we might be able to create an entire movie with the main character as a balloon. Think of the comedies we can create.
How Does Sora AI Work?
Sora’s first short film, “Air Head”, shows that Sora AI is the most advanced AI-powered video generation tool to date. Sora creates realistic, detailed videos up to 60 seconds long on almost any subject, realistic or fantastical. It only needs a prompt from the user to build on what it has learned and develop whole new worlds.
What We Know So Far
Sora AI is a new technology with limited access. There’s a strategic reason to limit information about a new technology: it manages the public’s expectations while the final product is polished. Sora is a very powerful tool, and strong safeguards and usage guidelines may be necessary before releasing it. Here’s what we know so far.
Sora Release Date
OpenAI has not provided a specific release date for public availability, or even a waiting list. However, many sources indicate that it may be released in the second half of 2024. Currently, Sora AI is only available to testers called “red teamers” and a select group of designers, like Shy Kids, who have been granted access.
Sora Price
OpenAI has not yet announced a price for Sora AI and has made no comment on whether there will be a free version like its other AI models. Based on other AI text-to-video generators, it’s likely that there won’t be a free version, and that Sora will offer a tiered subscription model aimed at users who want to produce videos regularly.
There is also a possibility of a credit-based system, similar to its competitor RunwayML. In a credit-based system, users purchase credits, and each credit is spent on a specific task related to generating a video.
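As a purely hypothetical illustration (neither OpenAI nor RunwayML has published this exact scheme), a credit-based model boils down to a small ledger: buy credits, then spend a fixed number per task. The task names and prices below are made up.

```python
# Made-up task prices, for illustration only.
VIDEO_COST = {"generate_60s": 25, "extend_clip": 10, "upscale": 5}

class CreditAccount:
    def __init__(self, credits: int = 0):
        self.credits = credits

    def buy(self, amount: int) -> None:
        self.credits += amount

    def spend(self, task: str) -> bool:
        cost = VIDEO_COST[task]
        if self.credits < cost:
            return False          # not enough credits; the user would top up here
        self.credits -= cost
        return True

account = CreditAccount()
account.buy(100)
print(account.spend("generate_60s"), account.credits)   # True 75
```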
Sora’s Video Length
OpenAI has said Sora can generate videos of up to a minute long with visual consistency. Scientific American reports that users will be able to extend a video by adding additional clips to a sequence. Sora’s first short film, “Air Head”, ran for a minute and twenty seconds, which suggests longer videos can be assembled by stitching generated clips together.
Sora’s Video Generation Time
OpenAI has not revealed how long it will take Sora AI to generate a video; however, Sora will run on NVIDIA H100 GPUs, which are designed to handle complex artificial intelligence workloads. According to estimates from Factorial Funds, these GPUs should allow OpenAI’s Sora to create a one-minute video in roughly twelve minutes.
How is Sora AI Different from Other Video Generators?
Many text-to-video generators have trouble maintaining visual coherency. They often produce visuals that are completely different from one another from scene to scene, which means the videos need further editing. In some cases, it takes longer to create the video you want with AI than it would to make it yourself.
Sora AI seems to surpass other text-to-video generators in the level of detail and realism it creates. It has a deeper understanding of how the physical world operates.
It Brings Motion to Pictures
Another feature of Sora AI is its ability to work from still-image prompts. Sora will be able to take a still photo, such as a portrait, and bring it to life by adding realistic movement and expression to the subject. This means you can generate an image using OpenAI’s DALL·E model and then prompt Sora with text describing what you would like the image to do.
This is like something out of Harry Potter. One of the biggest worries is that Sora AI could depict someone saying or doing something they never did. I don’t think the world’s ready for another Elon Musk deepfake.
Will Sora AI Undermine Our Trust In Videos?
There are over 700 AI-managed fake news sites across the world. OpenAI is already working with red teamers—experts in areas of false content—to help prevent the use of Sora AI in a way that can undermine our trust in videos.
Detection classifiers will play a big role in the future of AI. Among them are tools that can detect AI-generated writing, and content credentials that record in an image’s metadata whether it was made using AI.
AI image generators like Adobe Firefly are already using content credentials for their images.
Why do Sora AI Videos Look So Good?
Sora AI generates its videos using ‘spacetime patches’. Spacetime patches are small segments of video that let Sora analyze complex visual information by capturing both appearance and movement together. This produces more realistic, dynamic video than other AI video generators, which rely on fixed-size inputs and outputs.
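OpenAI has only described spacetime patches at a high level, but the basic slicing step can be sketched with NumPy. The patch dimensions below are made up for illustration; the point is simply that each token covers a small block of space and time rather than a whole frame.

```python
import numpy as np

def spacetime_patches(video: np.ndarray, pt: int = 4, ph: int = 16, pw: int = 16) -> np.ndarray:
    """Cut a video into small space-time blocks ('patches').

    video -- array of shape (frames, height, width, channels).
    Returns an array of shape (num_patches, pt * ph * pw * channels):
    one flattened token per patch, roughly what a transformer would consume.
    """
    t, h, w, c = video.shape
    video = video[: t - t % pt, : h - h % ph, : w - w % pw]   # trim to a multiple of the patch size
    t, h, w, c = video.shape
    patches = video.reshape(t // pt, pt, h // ph, ph, w // pw, pw, c)
    patches = patches.transpose(0, 2, 4, 1, 3, 5, 6)          # group the patch grid dimensions first
    return patches.reshape(-1, pt * ph * pw * c)

clip = np.random.rand(16, 256, 256, 3)    # stand-in 16-frame clip
tokens = spacetime_patches(clip)
print(tokens.shape)                        # (1024, 3072): 4*16*16 patches, each flattened
```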
One commenter said Sora AI videos are like dreams, only clearer… and that’s not a bad way to put it. After all, dreams are like movies our brains create, as anyone who increases their REM sleep will understand. But speaking of movies, how will Sora AI affect Hollywood?
Can Sora AI Replace Movies?
As amazing as OpenAI’s text-to-video generator is, it can’t replace actors or carry them through a prolonged storyline, but it can help producers create some fantastic movies. Sora AI can be used to create pre-visuals and concept art, and to help producers scout potential locations.
Pre-visualization: Sora can turn scripts into visual concepts to help both directors and actors plan complex shots.
Concept Art Creation: It can be used to generate unique characters and fantastical landscapes which can then be incorporated into the design of movies.
Location Scouting: Using the prompt description, OpenAI’s Sora can expand on location options and even create locations that are not physically realizable. An example would be a city protruding from a planet floating in space (I sense the next Dune movie here).
IC Inspiration
Content creators have a story to tell, and fantastic content is often the product of a fantastic mind. Sora could transform how we share inspiring stories.
Just imagine for a moment how long it took to conceptualize the locations and characters needed to create a movie like The Lord of the Rings: how many sketches, paintings, and 3D models had to be made before the team got its “aha” moment and finally found the perfect look for the film.
I wonder how much time Sora AI could save film and content creators, and with it, how much money. If it is truly as intuitive as it appears to be, it could revolutionize the work of filmmakers, video creators, game developers, and even marketers.
A lot of campaigns are hard to visualize. Take Colossal Biosciences as an example: a company running a de-extinction project to bring back the woolly mammoth. How on earth do you conceptualize the process of de-extinction in a campaign video without spending an enormous amount of money?
Sora could be just what the doctor ordered.
A Talking Lamp? Top 5 Unbelievable AI Technologies in 2024
Published 8 March 2024
AI Technologies in 2024
The year 2024 is sure to go down in history as the era of Artificial Intelligence. New AI technology has reached an astounding level of accuracy and versatility. The possibilities are now endless.
In the last few months, new ideas and gadgets have been coming out at such a pace that it’s nearly impossible to keep up. Here’s a rundown of the top 5 unbelievable AI technologies in 2024 and beyond.
OpenAI: Sora AI
OpenAI’s new Sora application can make realistic video footage based on your language prompt.
Sora’s work is based on a library of video samples; it has been trained to associate those videos with certain words. For example, if a user asks for a video of two pirate ships battling each other in a cup of coffee, Sora can draw on what it learned from existing videos of ships, battles, and coffee. It studies designs, movements, and concepts and uses them to make an original, realistic masterpiece.
OpenAI is being understandably cautious about releasing this new AI technology in 2024. It has only released Sora to a few developers so they can find and fix any bugs. Nevertheless, this upcoming AI technology sends the imagination soaring.
Baracoda: BMind AI Smart Mirror
This upcoming AI mirror is designed to improve its owner’s mental health and mood.
The BMind AI Mirror was developed by health tech pioneer Baracoda. It uses Artificial Intelligence to scan its owner’s face, posture, and gestures. Then it makes an educated guess as to their current emotional state. It even asks them how they’re feeling!
Recently, a journalist at an AI conference tested the mirror out. When she stood in front of it, it began by asking how her day was going. To test it, she said her day was terrible. It then turned on a calming, blue light, offered words of encouragement, and even offered a meditation session.
This new AI technology is expected to be available to the public by the end of 2024, with a price tag between $500 and $1,000.
Nobi: Smart Lamp
The one thing that most seniors fear is falling. They’re more prone to falls and are affected by them more severely than younger people.
That’s why Nobi, the age tech specialist, developed the Smart Lamp. This attractive ceiling light fixture is loaded with high-tech features. Its primary purpose is to prevent falls and provide critical instant help when falls occur.
It unobtrusively monitors the movements of anyone in its range. As soon as someone falls, the caregiver connected to the lamp is notified. They can then talk with their patient or loved one through the lamp to make sure they are okay.
After a fall, the device provides caregivers with security-protected pictures of the fall. Caregivers can analyze the pictures to see exactly what happened, allowing them to avoid falls in the future.
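Nobi hasn’t published its software interface, so the sketch below is purely hypothetical. It only illustrates the flow the article describes: detect a fall, alert the linked caregiver, and attach the protected snapshots for review.

```python
# Hypothetical illustration only; not Nobi's actual API or data format.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class FallEvent:
    room: str
    timestamp: datetime
    snapshots: list[str] = field(default_factory=list)   # paths to access-protected images

def notify_caregiver(event: FallEvent, caregiver_contact: str) -> str:
    message = (f"Fall detected in {event.room} at {event.timestamp:%H:%M}. "
               f"{len(event.snapshots)} snapshot(s) attached. "
               "Open two-way audio to check in?")
    # A real device would push this to a caregiver app; here we just return the text.
    return f"To {caregiver_contact}: {message}"

event = FallEvent("living room", datetime.now(), ["snap_001.jpg", "snap_002.jpg"])
print(notify_caregiver(event, "caregiver@example.com"))
```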
Pininfarina: OSIM uLove3 Well-Being Chair
There’s a new way to relax using AI technology in 2024. It’s a new chair that could find its way into the corner of your own living room.
Pininfarina is the Italian luxury design firm famous for shaping Ferraris. It has now released an amazing AI smart chair that takes stress relief into new realms.
Everyone experiences stress in their own way depending on factors such as age, size, and activity level. When a user sits in the OSIM uLove3 Well-Being Chair, an AI-powered biometric algorithm measures heart rate and lung function. It uses this information to determine a body tension score. Then, it uses that information to provide a personalized system of massages and calming music.
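Pininfarina and OSIM haven’t disclosed the chair’s actual algorithm, so the sketch below is purely hypothetical. It only illustrates the general idea of folding two biometric readings into a single tension score that then selects a program; every constant and weight is invented.

```python
# Hypothetical illustration only; not the OSIM uLove3's real algorithm.
def body_tension_score(heart_rate_bpm: float, breaths_per_min: float) -> float:
    """Map resting heart rate and breathing rate onto a 0-100 tension score."""
    hr_component = min(max((heart_rate_bpm - 55) / (110 - 55), 0.0), 1.0)
    br_component = min(max((breaths_per_min - 10) / (25 - 10), 0.0), 1.0)
    return round(100 * (0.6 * hr_component + 0.4 * br_component), 1)

def pick_program(score: float) -> str:
    if score < 33:
        return "gentle massage, ambient music"
    if score < 66:
        return "medium massage, slow-tempo playlist"
    return "deep-tissue massage, guided breathing track"

score = body_tension_score(heart_rate_bpm=88, breaths_per_min=18)
print(score, "->", pick_program(score))
```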
This chair is already available for purchase. At around $7,999.00 USD, it doesn’t come cheap, but for such a cool thing, it may just be worth the price.
Swarovski: AX Visio Smart Binoculars
Bird watchers and other outdoor enthusiasts around the world will be excited to hear about Swarovski’s AX Visio Smart Binoculars.
Bird watching is a practice as old as humankind itself. It’s fun, relaxing, and easy to do. It’s hard to imagine that someone could have found a way to make it even better. Swarovski, the luxury crystal company, was up to the challenge. They’ve developed a fantastic new pair of binoculars that will make spotting much more exciting.
The binoculars use new AI technology to access an immense database of bird data from the Cornell Lab of Ornithology. Cornell is one of the premier bird research facilities in the world. It uses the database to identify the exact species of bird or mammal it’s focused on.
Work is ongoing to teach these binoculars to identify other things as well, such as butterflies, mushrooms, and even stars.
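The AX Visio’s real pipeline and the Cornell data formats aren’t public here, but the pattern is familiar: an on-board classifier proposes a species, and a database lookup supplies the details shown in the eyepiece. The species records and scores below are stand-ins for illustration.

```python
# Tiny stand-in for a Cornell-style species database (illustrative only).
SPECIES_DB = {
    "northern cardinal": {"family": "Cardinalidae", "range": "Eastern North America"},
    "european robin":    {"family": "Muscicapidae", "range": "Europe, western Asia"},
}

def identify(image_scores: dict[str, float], min_confidence: float = 0.7):
    """Pick the top-scoring species and fetch its record, if confident enough."""
    species, score = max(image_scores.items(), key=lambda kv: kv[1])
    if score < min_confidence:
        return None, score          # show nothing rather than a bad guess
    return {"species": species, **SPECIES_DB.get(species, {})}, score

# Pretend the on-board classifier produced these scores for one frame:
result, confidence = identify({"northern cardinal": 0.92, "european robin": 0.05})
print(confidence, result)
```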
Other New AI Technologies You Should Know About
2024 won’t be the only year to look forward to when it comes to Artificial Intelligence. Here are a few other AI technologies that we’ve covered on Inspiring Click:
- The Humane AI Pin: This is a wearable, standalone AI device that can do things like translate your speech into another language and monitor dietary choices. It is meant to replace your phone with something a little more invisible.
- Organoid Intelligence: Also known as OI, this is a new branch of AI that is creating the world’s first biological computer using real brain cells.
- Smart Cane: The Smart Cane turns ordinary walking canes into ones that can pair with smartphones, helping the visually impaired in incredible ways.
- iterate.ai Weapon Detection System: This is a free, open-source app offered to non-profit organizations to help protect them. It can detect guns in schools and immediately inform the police.
- Google’s Articulate Medical Intelligence Explorer: This is Google’s newest medical AI, which runs diagnostics on patients. In the future, it may be used in clinics so you can get in to see the doctor a lot quicker.
AI technology in 2024 is sure to make many things that we once thought of as fantasy into a reality. This leaves us wondering what inspiring innovations tomorrow will bring.
IC Inspiration
At the dawn of the current AI boom was the now-famous ChatGPT, one of the earliest publicly available applications to allow natural conversation between computers and humans.
Who knew it could help diagnose a young boy?
Alex started experiencing significant pain at the age of four when he tried jumping in a bouncy house his parents bought him. His nanny gave him pain medication which helped for a while, but then his symptoms started changing. He was having emotional outbursts. He started chewing things and seemed exhausted all the time. He couldn’t move his legs properly, seeming to drag the left one behind him like dead weight.
His parents took him to the doctor, and then another. Some time passed, and a variety of doctors failed to come up with a diagnosis that satisfactorily explained all his symptoms.
His mother, Courtney, theorizes that this was because each specialist was only interested in or able to study their own area of expertise. His condition impacted so many areas of his life that no single specialist could get to the heart of the matter.
So, after 17 different doctors and one trip to the emergency room, Courtney tried a different approach. She asked ChatGPT.
“I went line by line of everything that was in his (MRI notes) and plugged it into ChatGPT,” she told Today.com. “I put the note in there about … how he wouldn’t sit crisscross applesauce. To me, that was a huge trigger (that) a structural thing could be wrong.”
And she was right.
Once ChatGPT knew everything there was to know about Alex’s condition and compared it against the medical knowledge housed on the internet, its suggestion was “tethered cord syndrome”.
It’s a relatively rare medical condition related to the more commonly known Spina Bifida. Children who have it are born with their spinal cord attached to another part of the body, such as a cyst or a bone. It causes pain and limits a child’s freedom of movement.
Courtney took Alex to a new neurologist along with ChatGPT’s suggestion, and the doctor agreed with the computer.
Today, Alex has undergone surgery to correct his condition. He’s beginning to recover, much to his and his family’s tremendous relief. Soon, this sports-loving boy may be out on the field with his friends, again.
Experts still warn against the idea that ChatGPT can be an effective diagnostician.
It can only review and repeat what others have written and can’t think outside the box and come up with an original diagnosis. It’s still better to get a final diagnosis from a trained medical expert.
However, it’s an exciting thought that it can, in the very least, start us on the right path. It helped Alex find the treatment he deserves.
What a tremendous start to AI technologies in 2024.
Alef Model A Flying Car Pre-Orders Surge to a Whopping 2,850
Published 4 March 2024
Alef Model A
Alef, the first letter of the Hebrew, Arabic, and Persian alphabets, is now the name behind the first flying car ever created.
The all-electric Alef Model A is set to hit the skies in 2025. The new flying car, whose look was developed by Bugatti designer Hirash Razaghi, has received certification from the Federal Aviation Administration (FAA) to begin testing its flight capabilities, and pre-orders are currently surging.
CEO of Alef Aeronautics, Jim Dukhovny, was inspired to create a flying car after watching Back to the Future. Coincidentally, Alef Aeronautics started developing the car in 2015, the same year that the movie predicted we would have real flying cars.
Alef Model A Price
The Alef Model A is priced at $300,000 USD per vehicle, with pre-orders being taken right now. Buyers can choose between a general pre-order deposit of $150 USD or a $1,500 deposit for the priority queue.
According to Jim Dukhovny, 2,850 pre-orders have been placed so far. That represents roughly $855,000,000 in revenue for Alef Aeronautics if every order converts at release. These numbers are expected to rise as the Alef Model A continues to wow audiences.
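A quick back-of-the-envelope check of that figure (assuming every pre-order converts at the full $300,000 price):

```python
pre_orders = 2_850
price_per_vehicle = 300_000          # USD
print(f"${pre_orders * price_per_vehicle:,}")   # $855,000,000
```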
The new flying car made an appearance at the Detroit Auto Show, which triggered a surge of pre-orders that has continued since.
How Does the Alef Flying Car Work?
The new flying car drives on streets like a conventional car, but it is also equipped with eight propellers. That gives its owner the ability to rise vertically out of a traffic jam and fly off, leaving the traffic behind, or rather underneath, them.
Flying Cars Can Solve the World’s Traffic Problems
In an era when drivers in some cities spend up to 156 hours a year in traffic jams, flying cars could help relieve a congestion problem that researchers say needs to be addressed. Drivers in major Canadian cities spent an average of 144 hours stuck in traffic in 2022 alone. If traffic doesn’t get any better, a Canadian who keeps driving for the next 30 years could spend roughly 180 days of their life stuck in traffic jams.
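That estimate follows directly from the Canadian average; here is the quick arithmetic:

```python
hours_per_year = 144        # average for major Canadian cities in 2022
years_driving = 30
total_hours = hours_per_year * years_driving
print(total_hours, "hours =", total_hours / 24, "days")   # 4320 hours = 180.0 days
```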
What is the Speed of the Alef Model A?
The flying car is a low-speed vehicle on the ground, with a top road speed of 25 mph (40 km/h). It has a driving range of 200 miles (322 km) and a flying range of 110 miles (177 km).
It is fully electric, so besides offering a transformative way to reduce time wasted on congested roads, it is also emerging as an invention for sustainable transportation, much like Michelin’s UPTIS airless car tires.
Has the Alef Model A Actually Flown?
While Alef flew prototypes between 2016 and 2019, the company has yet to give a live public demonstration of the vehicle flying. In fact, the top flying speed of the Alef Model A has not been revealed, which suggests smaller-scale experiments are still being conducted before a full public flight takes place.
Alef Model A Release Date
The FAA has granted a special airworthiness certification for the Alef Model A to begin testing its flight capabilities. Alef Aeronautics plans to ship the flying cars in 2026. However, the new flying car still needs approval from the National Highway Traffic Safety Administration before it can be driven on public roads.
The Alef Model Z Release Date and Price
Alef Aeronautics is also developing a four- to six-passenger version of the flying car called the Alef Model Z. The company is aiming to release it in 2035 with a much lower price tag of $35,000, putting flying cars within reach of far more people. Other flying cars, like the Doroni H1, have also set competitive prices for their eVTOL (electric vertical take-off and landing) vehicles.
Alef Model A vs Doroni H1
The Alef Model A is not the only flying car. The Doroni H1 is another highly anticipated eVTOL (electric vertical take-off and landing) vehicle. While the two share some futuristic properties, they have key differences:
- Alef Model A: Designed to look and feel like a traditional car. This flying car is made for extended journeys, with a focus on integration into existing roadways. It can drive on roads and then elevate to fly.
- Doroni H1: Resembles a more conventional small aircraft, with a distinct emphasis on its aerial capabilities. This eVTOL is designed for shorter flights and is made for people who want the experience of being their own pilot.
Functionality:

| Specifications | Alef Model A | Doroni H1 |
| --- | --- | --- |
| Range on road | 200 miles | N/A |
| Range in air | 110 miles | 60 miles per trip |
| Top speed in air | N/A (not yet revealed) | 120 mph |
| Capacity | 1 driver, 1 passenger | 1 pilot, 1 passenger |
| Price (approx.) | $300,000 | $150,000 |
Are Flying Cars the Future?
The flying car market is estimated to reach a value of $1.5 trillion by 2040. Interestingly enough, that is about as much as the car market is worth today. Moreover, many flying car companies are revealing plans for eVTOLs with affordable price tags. All signs point to flying cars becoming a big part of the future.
As suggested in the Alef Model A video, you can expect to see emergency responders, the army, and perhaps even local police flying them, because response time is crucial for them.
But what if everyone does have a flying car in the future? What would it take to make that possible, and what would happen to our planet?
Will Flying Cars be Autonomous?
Once a certain number of cars are in the air, flying would have to be autonomous. This would be the only way to ensure the safety of fliers as the sky fills up with vehicles (imagine distracted flying for a moment; it’s quite different from distracted driving).
Autonomous flying cars would have to rely on radar and lidar to detect everything around them. Since there are no lane markings in the air, cameras alone would not be enough to keep a self-flying car on course; in fact, there would be no course at all, because the entire sky could be everyone’s flying space.
What Would Happen if Everyone Had a Flying Car?
But imagine for a moment that we made it work, and this all happened. What would happen to the roads?
Just recently, the first freeway system in the world was discovered in an ancient Mayan city that remained hidden for thousands of years underneath a jungle in Guatemala.
Nature begins to take over roads when they are not being used. Everything turns green and lush, habitats start to form, and if everyone is flying from one place to another in an electric car, then the view underneath becomes the sort of sight that you used to travel miles away from home just to see.
Could the future be one where technology and nature co-exist? Where so much green makes the air you breathe so clean, that it gives a whole new meaning to the term fresh air— and it is only because we’re literally in the air.
A motivational thought indeed, and perhaps only a dream. Maybe someone who comes back to the future can let me know.
Are you reading this, Doc? If so, get in touch on our socials below.