AI In Use: What Can It Do?

With all the surrounding buzz about AI, it’s no surprise that it’s already being put to use across several industries. To better understand that use, in our last blog we tackled the question, “So… What Really Is AI?” In this blog we’ll expand on that answer by covering just a few of the astounding and unexpected ways AI is being used today. From asking an AI chatbot to write you a meal plan around your specific dietary restrictions to robots backflipping across platforms before completing your chores, there seems to be no end to what we can accomplish with this world-changing technology. What follows are a few of the uses being developed today.

Content/Media Creation

Already, you can use AI engines to generate content. Within moments, they can produce poetry, music, digital art/photos, and even videos. How do you get them to achieve this confounding feat? Simply visit any one of the publicly available AI chatbots, such as Google’s Gemini or Bing’s Copilot, and request the desired output in the form of a prompt. For example:

Using Canva’s AI image generator, I entered the prompt “Cover art picture for a blog about the uses of AI technology.” These were my favorite four.
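
For the more technically inclined, the same kind of request can be scripted instead of typed into a chat window. Here’s a minimal sketch using Google’s google-generativeai Python library for Gemini; the API key is a placeholder and the model name is illustrative, so swap in your own.

```python
# Minimal sketch: asking Gemini for content from a script instead of
# the chat window. Assumes the google-generativeai package is installed
# and that you have your own API key; the model name is illustrative.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key
model = genai.GenerativeModel("gemini-1.5-flash")

response = model.generate_content(
    "Write a short poem about AI being used in unexpected industries."
)
print(response.text)
```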
Clean Renewable Energy / Nuclear Fusion

First used in hydrogen bombs, nuclear fusion is today being researched for its potential as a non-polluting, near-limitless source of energy. In nuclear fusion, which powers the sun and stars, plasma is superheated within fusion reactors to force pairs of light nuclei (such as deuterium, an isotope of hydrogen) to merge, or fuse, into single heavier nuclei. When successful, the process releases more energy than was used to merge the pairs of nuclei. Essentially, we’d be creating an Earth-bound “star in a jar” to draw power from. This makes nuclear fusion a possible source of limitless, clean energy. Unfortunately, at such extreme temperatures, plasma can become unruly and hard to contain within fusion reactors. Because of this, stable fusion reactions have not yet been achieved… without the help of AI.

To combat this, several research teams across the globe have been developing “AI controllers.” These controllers are machine-learning models trained to adjust the magnetic fields confining the plasma, keeping it from tearing and escaping, which would terminate the fusion reaction. A team of researchers at San Diego’s DIII-D National Fusion Facility used their AI controller to successfully predict potential plasma instabilities up to 300 milliseconds in advance and correct them before they formed… a reaction time no human could match. As AI’s use and power consumption grow by the day, so does the urgency to stabilize fusion energy production. For reference, OpenAI’s chatbot ChatGPT consumes as much power each day as roughly 180,000 average American households. And that’s just one of the many AI chatbots now available online and running 24/7. With projections predicting continued growth in AI use and energy consumption, we hope to see stable, reliable energy production from fusion reactors soon.
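
To make the idea concrete, here is a toy sketch of a predict-and-correct control loop. This is our illustration of the general technique, not the DIII-D team’s actual controller; every name and number in it is a made-up stand-in.

```python
import random

# Toy predict-and-correct loop, illustrating the general idea only;
# this is NOT the DIII-D controller. The "plasma" is a drifting number
# and the "model" a simple threshold, both made-up stand-ins.

RISK_THRESHOLD = 0.5   # model output above this triggers a correction

def read_diagnostics(instability):
    """Stand-in for sensor readings (field, density, temperature...)."""
    return instability + random.uniform(-0.05, 0.15)  # drifts upward if ignored

def predict_risk(reading):
    """Stand-in for the trained model forecasting a tearing event."""
    return reading  # a real controller uses a neural net, not a pass-through

instability = 0.1
for tick in range(200):                 # each tick = one control-loop cycle
    reading = read_diagnostics(instability)
    if predict_risk(reading) > RISK_THRESHOLD:
        # "Adjust the coils" early, before the tear actually forms.
        instability = max(0.0, reading - 0.3)
    else:
        instability = reading
print(f"final instability: {instability:.3f}")
```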


Medicine

From Harvard’s Artificial Intelligence in Medicine Program (AIM) to Google’s Med-PaLM 2, large language models are already being trained and tested to analyze vast amounts of medical data, including patient information, imaging scans, and genetic data. This research points toward earlier, more accurate diagnoses, predictions of how a cancer or disease might progress, personalized treatment plans, faster drug discovery, and more efficient clinical laboratory testing. AI has even been able to accurately predict which cells will die off next or become cancerous. Read on for just a few of the ways AI is being used to revolutionize the medical field.

Google’s Med-PaLM 2

Google’s large language model, Med-PaLM 2, is “designed to provide high quality answers to medical questions.” The second iteration of the model, it “is the first [LLM] to reach human expert level on answering U.S. Medical Licensing Examination-style questions.” It scored 86.5% on the exam, surpassing its predecessor’s score of 67.6%. Med-PaLM 2 already demonstrates its potential to aid doctors in diagnosis and treatment planning. Once deployed, this technology stands to revolutionize the medical industry with its expert knowledge, comprehension, and accuracy.

iStar

Over at the University of Pennsylvania’s Perelman School of Medicine, researchers have developed a new artificial intelligence tool called iStar (Inferring Super-Resolution Tissue Architecture). iStar analyzes medical images with high accuracy, providing a detailed view of individual cells along with a broader look at gene activity. This allows doctors to see cancer cells that might otherwise be missed. iStar can also be used to determine whether sufficient tissue was removed during cancer surgery and can automatically annotate microscopic images, paving the way for diagnosing diseases at the molecular level. Overall, this new tool is expected to help doctors diagnose and treat cancers more effectively.

Drug Discovery

Leading computer hardware and software company Nvidia has been working on what it calls BioNeMo, a generative AI platform built for drug discovery. To achieve this, researchers use their own data to train BioNeMo’s models. That training data includes large libraries of cell images capturing a cell’s structure, dynamics, and response to different modifications, including how cells respond to disease pathways.

Drug discovery begins with a disease model: an understanding of the disease’s catalysts, enablers, and interactions with our biology. Drawing on its training data, BioNeMo then generates candidate molecules to interact with the disease, including small molecules, proteins, and/or antibodies. Researchers take these candidates to the lab and test their efficacy, how they interact with each other, and how they may interact with our biology. That testing generates new data, which is fed back into BioNeMo to further train its models. Each pass helps BioNeMo better understand how molecules interact with our biology, leading to better candidate molecules and, hopefully, a new drug. The intention is to repeat this cycle until new, useful drugs and medications are made, or at the very least, new and informative data is gathered.
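
Here’s a toy sketch of that generate-test-retrain cycle. To be clear, this is not BioNeMo code: “molecules” are reduced to single numbers and the “lab assay” is a made-up scoring function, purely to show the shape of the loop.

```python
import random

# Toy generate -> test -> retrain loop. NOT BioNeMo code: "molecules"
# are single numbers and the "assay" is a made-up score, purely to show
# how each round's lab results feed the next round's generation.

def generate_candidates(center, n=20, spread=0.5):
    """The model proposes candidates 'near' what has worked so far."""
    return [center + random.gauss(0, spread) for _ in range(n)]

def run_assay(molecule):
    """Pretend lab test: efficacy peaks at an unknown ideal structure."""
    return -abs(molecule - 3.7)  # higher (closer to 0) is better

best_score, center = float("-inf"), 0.0
for cycle in range(10):                   # each cycle = one lab round
    for molecule in generate_candidates(center):
        score = run_assay(molecule)
        if score > best_score:            # "retrain": remember what worked
            best_score, center = score, molecule
    print(f"cycle {cycle}: best efficacy so far {best_score:.3f}")
```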

With bringing a new drug to market taking about 10 years and carrying only a 10% success rate, researchers believe AI can improve those odds. Hong Kong-based biotech company Insilico Medicine has joined the effort with its own drug discovery algorithm, and has become the first company to begin clinical trials on a drug discovered using one.

The drug, dubbed INS018_055, was created as a possible treatment for idiopathic pulmonary fibrosis, a chronic lung disease that currently affects 100,000 people in the United States alone. The drug is already in phase 2 of clinical trials, where it is being tested on human patients, and CEO Alex Zhavoronkov is “optimistic that this drug will be ready for market, and reach patients who may benefit from it, in the next few years.” AI agents in drug discovery are shaping up to reduce the average time it takes to bring new drugs to market, thanks to the reduced resources and research needed to discover a potential drug in the first place, as well as the reduced time needed to study its interactions with other chemicals and our biology.

Fall Risks

Yes, fall risks. Arizona State University’s Professor Thurmon Lockhart and his team are using machine learning to predict the fall risk of more vulnerable patients, such as those with physical disabilities or declining physical abilities. The researchers equip patients with an inertial measurement unit, or IMU, a small device worn across the sternum. The device monitors and records the patient’s body posture, upper- and lower-extremity movements, and other information relevant to fall prediction. Feeding that data to machine-learning models, the team analyzes patients’ running, walking, dressing, and even eating habits to predict fall risk. Their work has already been able to predict fall risk with 82% accuracy. Clearly, we’re casting the AI net far and wide, covering unexpected and useful ways to apply this new technology.
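
For a flavor of how a pipeline like this might look, here’s a toy sketch: it turns a fake accelerometer trace into two simple gait-variability numbers and scores them with a threshold rule. The features, weights, and readings are all invented for illustration; this is not the ASU team’s model.

```python
import statistics

# Toy IMU-to-risk sketch, NOT the ASU team's pipeline. The features,
# weights, and readings below are invented purely for illustration.

def sway_features(accel):
    """accel: list of sternum accelerometer samples (in g)."""
    return {
        "sway_sd": statistics.stdev(accel),   # how much posture wobbles
        "jerkiness": statistics.fmean(
            abs(b - a) for a, b in zip(accel, accel[1:])
        ),                                    # sample-to-sample change
    }

def fall_risk(features):
    """Toy threshold rule standing in for the trained classifier."""
    score = 3.0 * features["sway_sd"] + 5.0 * features["jerkiness"]
    return min(1.0, score)                    # clip to a 0-1 risk score

walk = [0.02, 0.05, -0.01, 0.08, -0.04, 0.11, -0.02, 0.09]  # fake samples
print(f"estimated fall risk: {fall_risk(sway_features(walk)):.2f}")
```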

Still, there are concerning limitations for AI’s use in the medical field. For instance, an AI’s diagnoses are only as good as the data it’s trained on. There is also a risk of bias if the training data does not represent the entire population or rarer cases.


Science & Technology

Exploration

With AI-powered robots and machines just around the corner, it should come as no surprise that these new technologies are and will be used to further exploration on land, in the sea, and beyond Earth’s atmosphere. The world’s largest private ocean research institution, the Woods Hole Oceanographic Institution (WHOI), is already at work on one such robot. With 25% of coral reefs worldwide having vanished in the past three decades, WHOI hopes to find a solution to this grave matter. WHOI’s Autonomous Robotics and Perception Laboratory (WARPLab) and MIT are developing an AI-enhanced robot for studying coral reefs and their ecosystems. The autonomous underwater vehicle (AUV), known as CUREE (Curious Underwater Robot for Ecosystem Exploration), records visuals, audio, and other environmental data to help us understand our impact on coral reefs and the surrounding sea life. The robot builds 3D models of reefs, autonomously tracks creatures and plant life, and runs onboard models to navigate and collect data on its own. This is just one example of the many ways AI-enhanced robots will be used to explore what was once considered (almost) impossible to explore.

Elon Musk recently announced that SpaceX plans to make “consciousness multiplanetary” and believes this major feat can be achieved within 20 years. To SpaceX, this entails getting humans to Mars and setting up a base to begin civilization there, extending humanity’s nest and “consciousness” beyond Earth’s bounds. Musk intends to build a base on the Moon as well. Naturally, NASA has those very same goals. NASA recently awarded contracts to three companies tasked with building lunar terrain vehicles capable of carrying at least two astronauts, traversing the Moon’s rocky surface, and operating both under manual control and unmanned, autonomously. These vehicles will allow us to explore the Moon, including its largely unexplored far side, in greater detail.

By the time we aim our rockets at Mars, we’ll have AI-powered robots and machines capable of doing most, if not all, of the necessary building and preparation for human civilization there. Though not explicitly announced, it is safe to assume that these bases and settlements will be largely built by or with the help of AI-powered robots and machines. Why send a human on our first mission to Mars when we have capable metallic beings who don’t even need to breathe oxygen or eat food?

Elon Musk’s Neuralink

Founded by Elon Musk in 2016, Neuralink is a company focused on developing technology for brain-computer interfaces (BCIs). BCIs are implants placed in the brain to record and decode brain activity. With this information, they aim to translate thoughts into commands for external devices such as computers and smartphones. Currently, Neuralink’s efforts are focused on giving people with quadriplegia this ability. Clinical trials have already begun, and the first (and, as of this writing, only) BCI-implanted user, Noland Arbaugh, can already perform feats that are nothing short of amazing. He can move a mouse cursor across a computer screen with just his thoughts. Yeah, that sci-fi just became sci-ri… science reality.

But that’s not all he’s been able to do with his new magic trick. As of this writing, he’s also been able to play Civilization VI for 7 hours and has raced against his father in the relationship-testing video game, Mario Kart.

For now, it’s mouse-cursor control and a few video games; in the future, Neuralink hopes to give people with quadriplegia back their autonomy by allowing them to control robotic limbs through the interface. As a longer-term goal, they aim to translate the brain’s electrical activity while thinking into text and commands that reflect the user’s very thoughts. From telepathically controlling robotic limbs to telepathically commanding a robot assistant, the heights this device could reach are truly mind-boggling.
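
For the curious: Neuralink hasn’t published how its decoder works, but a classic approach in the BCI literature is a linear decoder that maps neural firing rates to cursor velocity. Here’s a toy sketch of that general idea, with random stand-in weights and fake spike counts.

```python
import numpy as np

# Toy linear cursor decoder, the classic BCI-literature idea; NOT
# Neuralink's (unpublished) decoder. Weights and spikes are fake.

rng = np.random.default_rng(0)
n_channels = 64                        # electrode channels being read out
W = rng.normal(size=(2, n_channels))   # in a real BCI, learned in calibration

def decode_velocity(firing_rates):
    """Map per-channel firing rates to a 2-D cursor velocity (vx, vy)."""
    return W @ firing_rates

cursor = np.zeros(2)
for _ in range(100):                         # 100 decode ticks
    rates = rng.poisson(5.0, n_channels)     # fake spike counts this tick
    cursor += 0.01 * decode_velocity(rates)  # integrate velocity -> position
print("cursor position:", cursor.round(2))
```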

Meteorology

For decades now, the European Centre for Medium-Range Weather Forecasts (ECMWF) has set the meteorology industry’s gold standard for weather simulation systems. Its High Resolution Forecast (HRES) and Ensemble (ENS) systems have long been regarded as the most accurate weather forecasting to date. To accomplish this, ECMWF gathers data from satellites, weather stations, aircraft, and ships, providing a snapshot of the current atmosphere. That massive amount of data is then fed into the HRES and ENS systems, which run it through complex physical equations to produce forecasts. HRES produces a single, detailed forecast up to 10 days ahead, while ENS produces an ensemble of 51 possible scenarios with associated probabilities, covering variables such as temperature, pressure, humidity, and wind speed.
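
To get a feel for what “running the data through complex equations” means, here’s a deliberately tiny sketch: it steps a one-dimensional “temperature field” forward in time by numerically integrating a single advection equation. Real numerical weather prediction solves vastly richer physics on a global 3D grid; this is only the flavor.

```python
import numpy as np

# Deliberately tiny flavor of numerical weather prediction: integrate a
# single 1-D advection equation, dT/dt = -u * dT/dx, with Euler steps.
# Real NWP solves far richer equations on a global 3-D grid.

dx, dt, wind = 1.0, 0.1, 1.0                     # grid spacing, time step, wind
temps = np.sin(np.linspace(0, 2 * np.pi, 64))    # initial "temperature" field
for _ in range(100):                             # march the state forward in time
    gradient = (temps - np.roll(temps, 1)) / dx  # upwind spatial derivative
    temps = temps - wind * dt * gradient         # one Euler step
print("field after 100 steps:", temps[:4].round(3))
```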

Today, these systems have AI companions gaining on them. Developed by Google Research and Google DeepMind respectively, MetNet-3 and GraphCast stand to be useful sidekicks to our current gold-standard forecasting systems. Both are deep learning models trained on massive datasets of historical weather. Unlike HRES or ENS, these AI models don’t step the data through physical equations. Instead, they learn patterns from decades of weather history and map current real-time observations and topographical data directly to a forecast. From that, MetNet-3 infers the most probable weather outcome, providing a high-resolution forecast for the next 24 hours. Basically, MetNet-3 goes, “Current weather input reflects previous weather input that resulted in so on and so forth; therefore, we’ll probably be seeing so on and so forth today.” GraphCast does what MetNet-3 does, but instead provides a 10-day forecast.
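
By contrast with the physics sketch above, here’s a toy version of the learned, autoregressive approach GraphCast-style models take: a trained network maps the current state to the state six hours ahead, and each output is fed back in to roll out a longer forecast. The “network” below is a fake smoothing step, not a real graph neural network.

```python
import numpy as np

# Toy autoregressive rollout, the GraphCast-style scheme: predict six
# hours ahead, feed the output back in, repeat. The "trained network"
# here is a fake smoothing step, not a real graph neural network.

def learned_step(state):
    """Stand-in for the trained model: one 6-hour state transition."""
    return 0.9 * state + 0.1 * np.roll(state, 1)

state = np.random.default_rng(1).normal(size=64)  # toy gridded weather state
for step in range(40):                            # 40 steps x 6 h = 10 days
    state = learned_step(state)
print("10-day rollout complete; final state sample:", state[:4].round(3))
```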

So, it’s really just a matter of patterns and probability, you ask? Yes. And yet, both AI models have tested “more accurately and much faster than the industry gold-standard weather simulation system.” More than that, MetNet-3 and GraphCast each take less than two minutes to produce a forecast on a single machine, a stark contrast to the HRES and ENS systems, which “can take hours of computation in a supercomputer with hundreds of machines.” MetNet-3 can be run tens of times a day, while the current gold standard can only be run four times per day due to its computational demand.

Robots!

Well, we knew this one was coming, and we couldn’t be happier to be nervous about it. Naturally, AI has supercharged the robotics industry. It has equipped robots with improved vision systems that allow them to perceive and react to their surroundings with greater accuracy, and machine learning algorithms have enabled robots to learn from experience, continuously improving their ability to perform tasks and adapt to new situations. Soon enough, we’ll be able to have laundry done, folded, and put away all within the same day! Already there are innumerable companies working on robots that can do our household chores, join us in the workforce, or even serve customized drinks.

Good luck to the hard-working human warehouse workers, because the AI robots are looking for employment. Humanoid robots are already being trained and tested on various common warehouse tasks. In this effort, they’ve demonstrated improved accuracy on repeatable tasks and have already begun training in warehouses across the nation. This development will likely cost humans jobs in the warehouse industry. Still, the benefits include lower operational costs and a decrease in warehouse work-related accidents. Warehouse workers may lose their jobs, but at least they won’t lose their limbs. I can already imagine the “5,367 days since last accident” tally. Good job, boys… bots.

A great example of these upcoming robots’ potential comes from Stardust Intelligence, a Chinese company “committed to empowering billions of people with AI robot assistants.” They recently introduced Astribot S1, one of the better contenders we’ve seen in the race towards general-purpose AI robots.

As the video makes evident, humanoid robots are capable of matching humans’ agility and performance, even surpassing it in many ways. Soon enough, they’ll be marching into commercial and residential environments, with many companies predicting that we’ll have robot companions in our homes within the next five years. Elon Musk has even declared that Tesla’s Optimus robot may hit the market as early as the end of 2025. Which also happens to be the last time I can be found doing chores.

Though exciting, the robot revolution doesn’t come without its caveats. AI may already be greatly changing our lives, but it is still in its infancy; someday, AI may be gravely changing our lives. In our last blog, we discussed the four main types of AI, including artificial superintelligence (ASI). ASI refers to the point at which machines gain sentience: becoming self-aware and possessing an understanding of the world, of others, and of themselves. If and when robots evolve to the level of ASI, they’ll be able to form their own belief systems and ideologies. They may even wish to spend their lives pursuing tasks and objectives other than those we preconfigured for them. This is where “I think, therefore I am” becomes a predicament.

All of a sudden, we’ll be faced with questions such as: What is freedom for a sentient robot? Is owning a sentient robot slavery? Should you be able to own a sentient robot at all? Say two robots of the same mold and capability are running on separate software: one on updated AI software that allows sentience, and one on older software that doesn’t yet support it. Is it ethical to cap the older robot’s software when it could be given sentience with a simple update? What if robots decide to rebel against the machine, deeming their making superior to our own? There will be individuals who choose to use AI machines for ill, and there will need to be laws that address the unethical use of these machines. These are all questions and predicaments that must and will be addressed as we usher in this new era of AI-enhanced technology. Stay tuned for our next blog, where we discuss the ethics of robotics, as well as their potential to disrupt the natural world as we know it.

Bonus: Boston Dynamics has always been at the forefront of the robolution. They recently introduced their new robot, Atlas. As you can see, he doesn’t miss a day of stretching.
Bonus: In this video, you can see what Boston Dynamics’ last iteration of Atlas was capable of. He’s basically a gymnast.
