Elon Musk has made another of his trademark predictions – this time, it’s that AI will be superior to humans within five years.
Musk has been one of the most vocal prominent figures warning about the dangers of artificial intelligence. In 2018, for example, Musk famously warned that AI could become “an immortal dictator from which we would never escape” and that the technology is more dangerous than nuclear weapons.
Speaking in a New York Times interview, Musk said that current trends suggest AI could overtake humans by 2025. However, Musk added “that doesn’t mean that everything goes to hell in five years. It just means that things get unstable or weird.”
If correct, the latest prediction from Musk would mean the so-called technological singularity – when machine intelligence overtakes human intelligence – is set to happen much sooner than other experts predict. Ray Kurzweil, a respected futurist, has previously estimated that the singularity will occur around 2045.
As the head of Tesla, SpaceX, and Neuralink – three companies which use AI far more than most – Musk isn’t against the technology, but he has called for it to be regulated.
Musk also co-founded OpenAI back in 2015 with the goal of researching and promoting ethical artificial intelligence. Following disagreements over the company’s direction, Musk left OpenAI in 2018.
Back in February, Musk responded to an MIT Technology Review profile of OpenAI, saying that it “should be more open” and that all organisations “developing advanced AI should be regulated, including Tesla.” Last year, OpenAI decided not to release a text generator that it believed had dangerous implications in a world already struggling with fake news and disinformation campaigns.
Two graduates later recreated and released a generator similar to OpenAI’s, with one saying that it “allows everyone to have an important conversation about security, and researchers to help secure against future potential abuses.”
OpenAI has since provided select researchers with access to its powerful text generator. The latest version, GPT-3, has been making headlines in recent weeks for the incredible things it can achieve with limited input. GPT-3 offers 175 billion parameters compared to GPT-2’s 1.5 billion parameters – which shows the rapid pace of AI advancements. However, Musk’s prediction of the singularity happening within five years perhaps needs to be taken with a pinch of salt.
The question of how computers can contribute to controlling the COVID-19 pandemic is being posed to experts in artificial intelligence (AI) all over the world.
AI tools can help in many different ways. They are being used to predict the spread of the coronavirus, map its genetic evolution as it transmits from human to human, speed up diagnosis, and aid the development of potential treatments, while also helping policymakers cope with related issues, such as the impact on transport, food supplies and travel.
But in all these cases, AI is only effective if it has sufficient examples to learn from. As COVID-19 has taken the world into uncharted territory, the “deep learning” systems, which computers use to acquire new capabilities, don’t necessarily have the data they need to produce useful outputs.
“Deep learning is good at predicting generic behaviour, but is not very good at extrapolating that to a crisis situation when almost everything that happens is new,” cautions Leo Kärkkäinen, a professor at the Department of Electrical Engineering and Automation in Aalto University, Helsinki and a fellow with Nokia’s Bell Labs. “If people react in new ways, then AI cannot predict it. Until you have seen it, you cannot learn from it.”
Despite this caveat, Kärkkäinen says robust AI-based mathematical models are playing an important role in helping policymakers understand how COVID-19 is spreading and when the rate of infections is set to peak. “By drawing on data from the field, such as the number of deaths, AI models can help to detect how many infections are in the dark,” he adds, referring to undetected cases that are still infectious. That data can then be used to inform the establishment of quarantine zones and other social distancing measures.
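The back-calculation Kärkkäinen alludes to can be sketched in a few lines. The infection-fatality rate and delay used below are illustrative assumptions for this sketch, not figures from the article or any specific model:

```python
# Toy back-calculation of "dark" (undetected) infections from reported
# deaths. IFR and delay are illustrative assumptions, not real estimates.

INFECTION_FATALITY_RATE = 0.01   # assumed: 1% of infections are fatal
DAYS_INFECTION_TO_DEATH = 18     # assumed mean delay from infection to death

def estimate_past_infections(deaths_today):
    """Deaths observed today imply this many infections roughly
    DAYS_INFECTION_TO_DEATH days ago, most of them never diagnosed."""
    return round(deaths_today / INFECTION_FATALITY_RATE)

# Example: 50 deaths reported today suggest roughly 5,000 infections
# were already present about 18 days earlier.
print(estimate_past_infections(50))
```

Real epidemiological models add far more structure (reporting delays, age-stratified fatality rates, time-varying transmission), but the core idea of inferring hidden infections from observed deaths is the same.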
It is also the case that AI-based diagnostics that are being applied in related areas can quickly be repurposed for diagnosing COVID-19 infections. Behold.ai, which has an algorithm for automatically detecting both lung cancer and collapsed lungs from X-rays, reported on Monday that the algorithm can quickly identify chest X-rays from COVID-19 patients as ‘abnormal’. This instant triage could potentially speed up diagnosis and ensure resources are allocated properly.
Identifying what’s working and what isn’t
The urgent need to understand what kinds of policy interventions are effective against COVID-19 has driven various governments to quickly award research grants to harness AI. One recipient is David Buckeridge, a professor in the Department of Epidemiology, Biostatistics and Occupational Health at McGill University in Montreal. Armed with a grant of C$500,000 (€323,000), his team is combining natural language processing technology with machine learning tools, such as neural networks (a set of algorithms designed to recognise patterns), to analyse more than two million conventional media and social media reports about the spread of the coronavirus from all over the world. “This is unstructured free text – classical methods can’t deal with it,” Buckeridge said. “We want to extract a timeline from online media, that shows systematically what’s working where.”
The team at McGill is using a mixture of supervised and unsupervised machine learning methods to distil the key pieces of information from the online media reports. Supervised learning involves feeding a neural network with data that has been annotated, whereas unsupervised learning simply employs raw data. “We need a framework for bias – different media sources have a different perspective and there are different government controls,” says Buckeridge. “Humans are good at spotting that, but it needs to be built into the AI models.”
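The supervised/unsupervised distinction can be illustrated with a toy sketch. The reports, labels and keyword rule below are invented stand-ins for the McGill team's actual annotated data and trained models:

```python
# Toy illustration of supervised vs. unsupervised input data. All the
# report texts and labels here are invented examples.

supervised_data = [                  # annotated: (text, label) pairs
    ("Schools closed in region A", "school_closure"),
    ("Borders shut to travellers", "travel_ban"),
]
unsupervised_data = [                # raw, unlabelled text
    "Region B announces school closures",
    "Country C suspends all flights",
]

# A supervised learner fits a mapping text -> label from the annotated
# pairs; an unsupervised learner must find structure (e.g. clusters) in
# the raw text on its own. This trivial keyword rule stands in for a
# trained supervised classifier.
def keyword_label(text):
    return "school_closure" if "school" in text.lower() else "travel_ban"

print([keyword_label(t) for t in unsupervised_data])
```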
The information derived from the news reports will be combined with other data, such as COVID-19 case reports, to give policymakers and health authorities a much more complete picture of how and why the virus is spreading differently in different countries. “This is applied research in which we will look to get important answers fast,” Buckeridge noted. “We should have some results of relevance to public health in April.”
AI can also be used to help identify individuals who might be unknowingly infected with COVID-19. Chinese tech company Baidu says its new AI-enabled infrared sensor system can monitor the temperature of people nearby and quickly determine whether they may have a fever, one of the symptoms of the coronavirus. In an 11 March article in the MIT Technology Review, Baidu said the technology is “being used in Beijing’s Qinghe Railway Station to identify passengers who are potentially infected, where it can examine up to 200 people in one minute without disrupting passenger flow.” A report from the World Health Organization on how China has responded to the coronavirus says the country has also used big data and AI to strengthen contact tracing and the management of priority populations.
AI tools are also being deployed to better understand the biology and chemistry of the coronavirus and pave the way for the development of effective treatments and a vaccine. For example, start-up BenevolentAI says its “AI-derived knowledge graph” of structured medical information has enabled the identification of a potential therapeutic. In a letter to The Lancet, the company described how its algorithms queried this graph to identify a group of approved drugs that could inhibit viral infection of cells. BenevolentAI concluded that the drug baricitinib, which is approved for the treatment of rheumatoid arthritis, could be of use in countering COVID-19 infections, subject to appropriate clinical testing.
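The kind of query described can be sketched with a toy graph. The edges and the drug-to-target link below are a simplified, assumed rendering for illustration only; the real knowledge graph and its query engine are far richer:

```python
# Toy "knowledge graph" query: find approved drugs that inhibit a
# protein which regulates a process the virus depends on. The specific
# targets and relations here are illustrative assumptions, not a real
# dataset.

edges = {
    "baricitinib": {"inhibits": ["AAK1", "JAK1"]},
    "AAK1":        {"regulates": ["viral endocytosis"]},
    "JAK1":        {"regulates": ["inflammation"]},
}

def drugs_targeting(process):
    """Return drugs linked, via an inhibited protein, to `process`."""
    hits = []
    for drug, rels in edges.items():
        for target in rels.get("inhibits", []):
            if process in edges.get(target, {}).get("regulates", []):
                hits.append(drug)
    return hits

print(drugs_targeting("viral endocytosis"))  # traverses drug -> protein -> process
```

The value of the graph formulation is that a single traversal pattern ("drug inhibits protein, protein regulates process") can be re-run against any disease mechanism of interest.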
Similarly, US biotech Insilico Medicine is using AI algorithms to design new molecules that could limit COVID-19’s ability to replicate in cells. In a paper published in February, the company said it had taken advantage of recent advances in deep learning to remove the need to manually design features and learn nonlinear mappings between molecular structures and their biological and pharmacological properties. “A total of 28 machine learning models generated molecular structures and optimised them with reinforcement learning” using a scoring system that reflected the desired characteristics, the researchers said.
Some of the world’s best-resourced software companies are also grappling with this challenge. DeepMind, the London-based AI specialist owned by Google’s parent company Alphabet, believes its neural networks can speed up the often-laborious process of solving the structures of viral proteins. It has developed two methods for training neural networks to predict the properties of a protein from its genetic sequence. “We hope to contribute to the scientific effort … by releasing structure predictions of several under-studied proteins associated with SARS-CoV-2, the virus that causes COVID-19,” the company said. These can help researchers to build understanding of how the virus functions and be used in drug discovery.
The pandemic has led enterprise software company Salesforce to diversify into life sciences, in a study demonstrating that AI models can learn the language of biology just as they can learn speech and images. The idea is that the AI system will then be able to design proteins, or identify unknown proteins, that have specific properties, which could be used to treat COVID-19.
Salesforce fed the amino acid sequences of proteins and their associated metadata into its ProGen AI system. The system takes each training sample and formulates a game in which it tries to predict the next amino acid in a sequence.
“By the end of training, ProGen has become an expert at predicting the next amino acid by playing this game approximately one trillion times,” said Ali Madani, a researcher at Salesforce. “ProGen can then be used in practice for protein generation by iteratively predicting the next most-likely amino acid and generating new proteins it has never seen before.” Salesforce is now seeking to partner with biologists to apply the technology.
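The iterative generation loop Madani describes can be illustrated with a toy stand-in for ProGen. Here the "model" is just bigram counts over invented amino-acid sequences, not a neural network, and the sequences are made up for this sketch:

```python
# Toy sketch of autoregressive generation: repeatedly predict the most
# likely next symbol and append it. Bigram counts stand in for ProGen's
# neural network; the training sequences are invented.
from collections import Counter, defaultdict

training_sequences = ["MKTAYIAK", "MKTAYAAK", "MKVAYIAK"]  # made-up proteins

# Count which amino acid follows each one (a bigram model).
following = defaultdict(Counter)
for seq in training_sequences:
    for a, b in zip(seq, seq[1:]):
        following[a][b] += 1

def generate(start, length):
    """Grow a sequence by always appending the most likely next residue."""
    seq = start
    while len(seq) < length:
        ranked = following[seq[-1]].most_common(1)
        if not ranked:              # no observed successor: stop early
            break
        seq += ranked[0][0]
    return seq

print(generate("M", 8))
```

ProGen does the same thing at vastly larger scale, with a learned model in place of the counts and with metadata conditioning the predictions.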
Many undergraduate engineering students have mixed feelings when faced with choosing between FORTRAN and MATLAB to solve engineering problems. In truth it is not a competition: FORTRAN is a core programming language, while MATLAB is a software package designed for solving mathematical problems. While it is possible to code and plot using MATLAB, FORTRAN gives you the versatility you need for the solution of engineering problems.
FORTRAN, also known as the programming language of scientists and engineers, was the king of scientific programming during the 1980s and 1990s. From 2000 onward, with the introduction of more user-friendly programming languages with visual interfaces, FORTRAN became less preferred by some application engineers, and from 2010 onward this trend accelerated. However, it must be remembered that thousands of engineers wrote thousands of FORTRAN codes over those three decades, and these codes are still applicable today in fluid dynamics, CFD, aerodynamics, propulsion, nuclear, mechanical and other main fields of engineering.
Thus, there is no need to reinvent the wheel: most of these codes can still be used today to solve complex engineering problems. Hence, FORTRAN needs to be learned by engineering students so that they have the ability to take any of these ready-made programs and apply them to current problems in their field. You can use advanced finite-volume techniques to solve fluid-flow problems and advanced finite-element methods to solve structural problems, which together will help you solve 80% of all engineering problems. FORTRAN is especially useful for engineering problems centred on partial differential equations, as it is easy to model these equations in FORTRAN. You can use the lectures on this website to build up your knowledge of FORTRAN so that you can adapt it to the various codes.
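To give a flavour of the PDE kernels such legacy codes implement, here is a toy explicit finite-difference step for the 1-D heat equation (sketched in Python for brevity; a typical FORTRAN code would contain an equivalent pair of loops). Grid size, diffusivity and step sizes are illustrative choices:

```python
# Toy explicit finite-difference solver for the 1-D heat equation
# u_t = alpha * u_xx -- the kind of kernel classic engineering codes
# implement. All parameter values here are illustrative.

alpha, dx, dt = 1.0, 0.1, 0.004   # dt <= dx**2 / (2*alpha) for stability
r = alpha * dt / dx**2            # = 0.4, within the stable range

def step(u):
    """Advance interior points one time step; boundary values held fixed."""
    return [u[0]] + [
        u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1])
        for i in range(1, len(u) - 1)
    ] + [u[-1]]

u = [0.0] * 5 + [1.0] + [0.0] * 5  # initial heat spike in the middle
for _ in range(100):
    u = step(u)

print(max(u))  # the spike diffuses: the peak is now well below 1.0
```

The same structure scales to the finite-volume and finite-element schemes mentioned above; only the stencil and the mesh bookkeeping become more elaborate.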
With more recent developments such as Visual Fortran, you can also build better user interfaces.