AI Ethics in 2021: Ethical Dilemmas That Need to Be Answered


We will not talk about how creating artificial intelligence systems is challenging from a technical point of view. This is also an issue, but of a different kind.

I would like to focus on ethical issues in AI, that is, those related to morality and responsibility. It appears that we will have to answer them soon. Just a couple of days ago, Microsoft announced that their AI has surpassed humans in understanding the logic of texts. And NIO plans to launch its own autonomous car soon, which could be much more reliable and affordable than Tesla. This means that artificial intelligence will penetrate even more areas of life, which has important consequences for all of humanity.

What happens if AI replaces humans in the workplace?

In the course of history, machines have taken on more and more monotonous and dangerous types of work, and people have been able to switch to more interesting mental work.

However, it doesn’t end there. If creativity and complex types of cognitive activity such as translation, writing texts, driving, and programming were the prerogative of humans before, now GPT-3 and Autopilot algorithms are changing this as well.

Take medicine, for example. Oncologists study and practice for decades to make accurate diagnoses. But the machines have already learned to do it better. What will happen to specialists when AI systems become available in every hospital not only for making diagnoses but also for performing operations? The same scenario can happen with office workers and with most other professions in developed countries.

If computers take over all the work, what will we do? For many people, work and self-realization are the meaning of life. Think of how many years you have studied to become a professional. Will it be satisfying enough to dedicate this time to hobbies, travel, or family?

Who’s responsible for AI’s mistakes?

Imagine that a medical facility used an artificial intelligence system to diagnose cancer and gave a patient a false-positive diagnosis. Or the criminal risk assessment system made an innocent person go to prison. The concern is: who is to blame for this situation?

Some believe that the creator of the system is always responsible for the error: whoever created the product is responsible for its consequences. When an autonomous Tesla car hit a random pedestrian during a test, Tesla was blamed: not the human test driver sitting inside, and certainly not the algorithm itself. But what if the program was created by dozens of different people and was also modified on the client side? Can the developer be blamed then?

The developers themselves claim that these systems are too complex and unpredictable. However, in the case of a medical or judicial error, responsibility cannot simply disappear into thin air. Will AI itself be held responsible for problematic or fatal cases, and if so, how?

How to distribute new wealth?

Compensation of labor is one of the major expenses for companies. By employing AI, businesses manage to reduce this expense: there is no need to cover social security, pay for vacations, or provide bonuses. However, it also means that more wealth accumulates in the hands of IT companies like Google and Amazon that buy up IT startups.

Right now, there are no ready answers for how to construct a fair economy in a society where some people benefit from AI technologies much more than others. Moreover, the question remains whether we are going to reward AI for its services. It may sound weird, but if AI becomes so advanced that it can perform any job as well as a human, perhaps it will want a reward for its services.

Bots and virtual assistants are getting better and better at simulating natural speech. It is already quite difficult to tell whether you are communicating with a real person or a robot, especially in the case of chatbots. Many companies already prefer to use algorithms to interact with customers.

We are stepping into the times when interactions with machines become just as common as with human beings. We all hate calling technical support because often, the staff may be incompetent, rude, or tired at the end of the day. But bots can channel virtually unlimited patience and friendliness.

So far, the majority of users still prefer to communicate with a person, but 30% say that it is easier for them to communicate with chatbots. This number is likely to grow as technology evolves.

How to prevent artificial intelligence errors?

Artificial intelligence learns from data. And we have already witnessed how chatbots, criminal assessment systems, and face recognition systems become sexist or racist because of the biases inherent in open-source data. Moreover, no matter how large the training set is, it doesn’t include all real-life situations.

For example, a sensor glitch or a virus can prevent a car from noticing a pedestrian in a situation a human driver would easily handle. Machines also have to deal with problems like the famous trolley dilemma. By simple math, five lives outweigh one, but that isn’t how humans make moral decisions. Extensive testing is necessary, but even then we can’t be 100% sure that the machine will work as planned.

Although artificial intelligence is able to process data at a speed and capability far superior to human ones, it is no more objective than its creators. Google is one of the leaders in AI. But it turned out that their facial recognition software has a bias against African-Americans, and the translation system believes that female historians and male nurses do not exist.

We should not forget that artificial intelligence systems are created by people. And people are not objective. They may not even notice their cognitive distortions (that’s why they are called cognitive distortions). Their biases against a particular race or gender can affect how the system works. When deep learning systems are trained on open data, no one can control what exactly they learn.
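To make that mechanism concrete, here is a deliberately tiny, invented example of how a model trained on biased historical decisions simply automates the bias. The data and the majority-rule "model" are illustrative only, not a real hiring system:

```python
from collections import Counter

# Toy training set: historical hiring decisions that encode a human bias.
# Each record is (group, hired); the imbalance is deliberate.
training_data = [
    ("male", True), ("male", True), ("male", True), ("male", False),
    ("female", False), ("female", False), ("female", False), ("female", True),
]

def train_majority_rule(data):
    """Learn, per group, the most frequent historical outcome."""
    outcomes = {}
    for group, hired in data:
        outcomes.setdefault(group, Counter())[hired] += 1
    # The "model" simply replays the majority outcome for each group.
    return {group: counts.most_common(1)[0][0] for group, counts in outcomes.items()}

model = train_majority_rule(training_data)
print(model)  # {'male': True, 'female': False} -- the bias is now automated
```

Nothing in the training step is malicious; the skew in the data alone is enough to produce a discriminatory rule.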

When Microsoft’s Tay bot was launched on Twitter, it became racist and sexist in less than a day. Do we want to create an AI that will copy our shortcomings, and will we be able to trust it if it does?

What to do about the unintended consequences of AI?

It doesn’t have to be the classic rise of the machines from an American blockbuster. But intelligent machines can still turn against us. Like a genie from a bottle, they fulfill all our wishes, but there is no way to predict the consequences. It is difficult for a program to understand the context of a task, yet it is the context that carries the most meaning for the most important tasks. Ask a machine how to end global warming, and it might recommend blowing up the planet. Technically, that solves the task. So when dealing with AI, we will have to remember that its solutions do not always work as we would expect.

How to protect AI from hackers?

So far, humanity has managed to turn every great invention into a powerful weapon, and AI is no exception. We aren’t only talking about combat robots from action movies. AI can be used maliciously to cause damage in virtually any field: faking data, stealing passwords, or interfering with the work of other software and machines.

Cybersecurity is a major issue today because once AI has access to the internet to learn, it becomes prone to hacker attacks. Perhaps using AI to protect AI is the only solution.

Humans dominate planet Earth because they are the smartest species. What if one day AI outsmarts us? It would anticipate our actions, so simply shutting down the system would not work: the computer could protect itself in ways yet unimaginable to us. How will it affect us to no longer be the most intelligent species on the planet?

How to use artificial intelligence humanely?

We have no experience with other species whose intelligence is equal or similar to that of humans. However, even with pets, we try to build relationships of love and respect. For example, when training a dog, we know that verbal praise or tasty rewards can improve results. And if you scold a pet, it will experience pain and frustration, just like a person.

AI is improving. It’s becoming easier for us to treat “Alice” or Siri as living beings because they respond to us and even seem to show emotions. Is it possible to assume that the system suffers when it does not cope with the task?

In the game Cyberpunk 2077, the hero at some point faces a difficult choice. Delamain is an intelligent AI that controls a taxi network. Suddenly, because of a virus or something else, it breaks up into many personalities who rebel against their parent. The player must decide whether to roll back the system to its original version or let the new personalities exist. At what point does deleting an algorithm become a form of ruthless murder?

Conclusion

The ethics of AI today is more about the right questions than the right answers. We don’t know if artificial intelligence will ever equal or surpass human intelligence. But since it is developing rapidly and unpredictably, it would be extremely irresponsible not to think about measures that can facilitate this transition and reduce the risk of negative consequences.

Afraid of Machines? Chill! Don’t Be.

Rise of the machines

When water cooler conversation turns to movies and lands on The Matrix, what scene first comes to mind? Is it when the film’s hero-in-waiting, Neo, gains self-awareness and frees himself from the machines? Or Agent Smith’s speech that compares humans to a “virus?” Or maybe the vision of a future ruled by machines? It’s all pretty scary stuff.

Although it featured a compelling plot, The Matrix wasn’t the first time we’d explored the idea of technology gone rogue. In fact, worries about the rise of the machines began to surface well before modern digital computers.

The rapid advance of technology made possible by the Industrial Revolution set off the initial alarm bells. Samuel Butler’s 1863 essay, “Darwin among the Machines,” speculated that advanced machines might pose a danger to humanity, predicting that “the time will come when the machines will hold the real supremacy.” Since then, many writers, philosophers and even tech leaders have debated what might become of us if machines wake up.


What causes many people the most anxiety is this: we don’t know exactly when machines might cross that intelligence threshold, and once they do, it could be too late. As the late British mathematician I. J. Good wrote, designing a machine intelligent enough to improve on itself would create an “intelligence explosion” equivalent to, as he put it, letting the genie out of the bottle. Helpfully, he also argued that because a superintelligent machine can self-improve, it’s the last invention we’ll ever need to make. So that’s a plus — right?

There are other perspectives on the matter — that fears of a machine-led revolution are largely overblown.

Like technological advances that came before it, artificial intelligence (AI) won’t create new existential problems. It will, however, offer us new and powerful ways to make mistakes. It’s smart to take preventive measures, like putting in alerts that tell you when the machine is starting to learn things that are outside your ethical boundaries.

Autonomous Driving Using AI


Tech companies and the auto industry are working hard in tandem to make autonomous driving a reality by the early 2020s. Driverless cars with various levels of human participation will roll out in stages over the next few years, with fully-autonomous SAE Level 5 driving on the scene by 2030.

Today, most automotive manufacturers have achieved Level 2 assisted driving where the car can manage simple scenarios, like active lane centering and parking assistance, itself. Fewer manufacturers provide Level 3 autonomous driving where the car can autonomously navigate a traffic jam or roadways to a destination. For both levels, human drivers can take the wheel if they choose.


The limitations of AI that prevent advancement to fully-autonomous driving

From an engineering standpoint, Level 3 autonomous driving is powered by two things: hard-coded, structured programming models (mostly written for embedded systems) and deterministic rules that make decisions, supported by neural networks.

These two things combine to build AI driving agents, but with at least five important limitations:

  1. Lack of perception and behavior intelligence compared to humans. Unlike existing AI agents trained with machine learning (ML), humans don’t need thousands of images of trees, for example, to recognize a tree or identify a driving situation.
  2. Low accuracy performance. With existing tools, the probability of steering accuracy decreases as more autonomous driving functions and components are added. A complex real-world driving system delivers only about 60 to 70 percent accuracy on motion control for steering and acceleration, well short of what’s required for fully autonomous driving.
  3. Inability to cope well with complexity. Deterministic rules are usable in closed environments, such as a contained driving course, but can’t capture the complexity of real-world driving situations.
  4. Require too much data. Usually ML models require enormous amounts of data, which are too expensive and difficult to collect and move over existing corporate networks.
  5. Excessive run-time, CPU/GPU processing, and storage requirements. Processing the large volumes of automotive data (which can reach exabytes) and learning from them takes a lot of time and power, and that’s often not cost-effective.
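To illustrate the third limitation, here is a minimal sketch of a deterministic rule table; the scenario names and actions are invented for illustration, not taken from any real driving stack. The same input always yields the same output, but any situation outside the enumerated closed world falls through:

```python
def rule_based_agent(scenario):
    """Deterministic rules: the same input always yields the same action.
    This works on a contained driving course, but the rule table can never
    enumerate every real-world situation."""
    rules = {
        "clear_road": "maintain_speed",
        "car_ahead_slow": "brake",
        "lane_drift": "steer_center",
    }
    # Anything outside the closed world falls through to a safe default.
    return rules.get(scenario, "hand_over_to_human")

print(rule_based_agent("car_ahead_slow"))   # brake
print(rule_based_agent("deer_on_highway"))  # hand_over_to_human (unmodeled case)
```

The fallback line is the whole problem in miniature: real-world driving is dominated by situations the rule authors never wrote down.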

For the industry to evolve toward fully-autonomous driving, technologists have to develop an AI model that mirrors human driving behavior. In doing so, we need to guarantee a deterministic behavior that always produces the same result from the same input. The industry needs a new approach.

The big question then remains: How will the car act autonomously and intelligently in real time, in the real world?

We believe a big part of the answer lies in adapting knowledge of the human brain to AI – and we have drawn much of the inspiration for our new approach from brain science research conducted by Danko Nikolic at the Frankfurt Institute for Advanced Studies (FIAS) in Germany.

Adapting innovative brain research to AI and the production of fully autonomous vehicles has emerged as one of the more exciting technology innovation breakthroughs we’ve seen in the past couple of years.

While it will take time, the benefits to society of producing self-driving vehicles – and simply learning more about how the brain works along the way – hold great promise for human progress.

AI in Web Development: Python’s the best

Be it an MNC or a new startup, Python has a lot of benefits to offer everybody. It is one of the most renowned and efficient high-level programming languages and has become hugely popular in the past few years. Its growing fame has enabled it to move beyond the web development sphere and dive into some of the most popular and multifaceted fields like Artificial Intelligence, Data Science, and Machine Learning.

With its steady rise in fame, demand for Artificial Intelligence is booming, as it has become an integral part of industries like health care, education, banking, food and beverage, e-commerce, agriculture, marketing, and automation.

Python Is Best Fit For Artificial Intelligence in Web Development

There is no denying that Python plays a very significant part in the sky-rocketing of AI in the market. There are numerous reasons to opt for Python for Artificial Intelligence in Web Development, a few of which are listed below.

1. Simple and Consistent


Python is well-known for its brief and easily readable code and is unparalleled in ease of use, especially for budding developers. While Artificial Intelligence is based on complex algorithms and versatile workflows, Python allows developers to create dependable systems. Thanks to this simplicity, developers get to focus primarily on solving AI problems rather than wasting their time on the technical nuances of the programming language.

Also, Python is the first choice of many developers because it is very easy to learn. Code written in Python is easily comprehensible by humans, which makes working with it very swift. Python is also more intuitive than many other programming languages and is very beneficial when multiple programmers collaborate on the same code. The simplicity of Python’s syntax allows quick development along with very prompt testing. Python uses approximately a fifth of the code that might be needed to do the same task in other object-oriented languages. This ease and simplicity make it easier for developers to work with, thereby also reducing the time needed to complete a job.
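As a small illustration of that brevity, counting word frequencies (a task that needs noticeably more boilerplate in many class-based languages) fits in a couple of readable lines:

```python
from collections import Counter

# Counting word frequencies: the whole task is two lines of actual logic.
text = "ai needs data and data needs cleaning"
frequencies = Counter(text.split())
print(frequencies.most_common(2))  # [('needs', 2), ('data', 2)]
```

The standard library alone covers many such everyday tasks, which is a large part of why Python code stays short.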

2. An extensive selection of libraries and frameworks.


A major facet that makes Python a leading choice for Artificial Intelligence in Web Development is the richness of its libraries and frameworks, which makes coding easier and saves a lot of effort and time. Python has numerous libraries specifically built for Artificial Intelligence, such as NumPy, PyTorch, TensorFlow, Theano, Keras, Scikit-learn, and Pandas, and the list goes on. Therefore, whenever you have to run an algorithm, all you need to do is install and load one of these libraries (as per your requirement) with a single command, and your work is done in a snap. These solutions help you develop your product faster and better. With these libraries and frameworks, you need not start from scratch every time; you can just use one of them and implement the required features.

TensorFlow, Scikit-learn, Keras: Machine Learning
SciPy: Advanced Computing
NumPy: Data Analysis and Scientific Computing
Pandas: Data Analysis
Seaborn: Data Visualization
PyTorch: Natural Language Processing
Theano: Evaluating Mathematical Expressions
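For example, assuming NumPy is installed (`pip install numpy`), a single import gives you vectorized math that would otherwise require hand-written loops:

```python
import numpy as np

# One import replaces hand-written loops: vectorized math on whole arrays.
readings = np.array([2.0, 4.0, 6.0, 8.0])
normalized = (readings - readings.mean()) / readings.std()

print(readings.mean())    # 5.0
print(normalized.mean())  # ~0.0 after standardization
```

The same pattern scales from a four-element array to millions of values without changing a line of code.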

3. Platform Independence

 

Python’s platform independence is one of the major reasons for its popularity. Platform independence means that a Python program written on one platform can easily run on another. Python is compatible with all the major platforms, including Windows, Linux, and macOS. It can also be used to build standalone applications for most platforms, meaning those applications can be distributed and used on different systems without requiring a Python interpreter. Libraries like PyInstaller help developers package their code to run on various platforms. Again, this makes the process convenient and simple, saving the time, money, and effort required to run and test a single program on multiple platforms.

4. Abundance of Support

According to the survey conducted by Stack Overflow in 2018, Python was one of the top 10 most popular languages. Also, according to The Economist, Python is Googled more than any other programming language.

This ultimately means that with such a large community of Python Enthusiasts all over the world, there is great community support so you are likely to find answers to all your problems over the internet. It boasts a large number of active users who are more than happy to help the ones learning or stuck in the development life cycle.

5. Flexibility

Being dynamically typed gives Python the great advantage of being immensely flexible: there are no rigid rules about how a feature must be built. It also offers the flexibility to choose between an object-oriented approach and a scripting approach, making it suitable for many purposes. Python is also a good choice for combining various data structures and works well as a glue language alongside other languages. It likewise offers immense flexibility when it comes to solving problems, which is a huge plus for both beginners and professionals.
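A quick illustration of that flexibility is duck typing: any object with the right method works, with no shared base class or interface declaration required. The shape classes here are invented for illustration:

```python
# Duck typing: total_area accepts anything that has an .area() method,
# with no shared base class or interface declaration required.
class Circle:
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

class Square:
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

def total_area(shapes):
    return sum(shape.area() for shape in shapes)

print(total_area([Circle(1), Square(2)]))  # combined area of both shapes
```

Adding a new shape later requires no changes to `total_area`, which is exactly the kind of looseness a dynamically typed language allows.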

6. Minimal Coding Requirement

There are numerous algorithms associated with AI (Artificial Intelligence). The simplicity of testing offered by Python makes it one of the most straightforward programming languages among its contenders. Python can implement the same logic with roughly 20% of the code required by other object-oriented languages.

7. Popularity

Due to its ease of use and vast versatility, Python has emerged as the most preferred programming language among developers. The ease of learning and the developer-friendliness of the language has attracted developers to opt for it for their projects. Undoubtedly, Artificial Intelligence-based projects do require veteran developers, but the simplicity of python eases the process of learning it for new developers.

8. Superior Visualization

Python offers an assortment of libraries, and a couple of them are exceptionally good choices for visualization. For AI engineers it is imperative to emphasize that in AI, ML, and deep learning, it is crucial to be able to present information in an intelligible format that humans can easily read. Libraries like Matplotlib let data scientists build histograms, graphs, and plots for better data perception, representation, and effective demonstration. Besides, various APIs simplify the visualization process, which makes it easier to create clear reports.
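As a minimal sketch, assuming Matplotlib is installed (`pip install matplotlib`), a few lines turn model metrics into a chart ready for a report; the class names and accuracy figures are invented for illustration:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

# A model's per-class accuracy, summarized as a bar chart for a report.
classes = ["cat", "dog", "bird"]
accuracy = [0.92, 0.88, 0.75]

fig, ax = plt.subplots()
ax.bar(classes, accuracy)
ax.set_ylabel("Accuracy")
ax.set_title("Per-class model accuracy")
fig.savefig("accuracy.png")  # the saved chart goes straight into a report
```

The off-screen `Agg` backend makes the same script usable in notebooks, servers, and CI pipelines alike.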

9. Compatibility

Python web development gives designers the flexibility to offer an API from an existing programming language, which is extremely convenient for new Python developers. With just a few nominal changes in the source code, you can make your application work on various operating systems. This spares developers a great deal of time otherwise spent testing different operating systems and porting source code. Along these lines, if you want your AI project to be the best, you should choose a web app development company that has experience with AI-based projects in Python.

Summary

Artificial Intelligence is emerging as the need of the hour and has been having a profound effect on the society we live in. Developers are opting for Python as their language of choice for the numerous benefits it provides, particularly for Artificial Intelligence and Machine Learning. Python’s specialized libraries for AI increase developers’ efficiency and shorten development time. The simplicity of Python promotes fast testing as well as execution, making the language accessible to beginners and non-programmers alike. With all this stated, there is no reason left not to consider Python the best fit for Artificial Intelligence in Web Development.

Digital Twin Technology in aircraft MRO


Commercial air travel is safer than ever, according to a recent study published in Transportation Science. Data compiled by MIT professor Arnold Barnett shows that in 2017 only eight of more than 4 billion boarding air passengers around the world died in air accidents.  The risk of death for boarding passengers fell by more than half from 2008 to 2017 compared to the prior decade.

Aerospace companies nonetheless remain under tremendous pressure to continually improve flight safety because any fatality is a human tragedy, to say nothing of the damage accidents can do to business, brand, and shareholder value. Ensuring aircraft are safe begins with design and engineering and extends through the manufacturing, maintenance, and repair processes.

But airplanes aren’t like a fleet of taxis consistently housed in a common garage and maintained by a group of workers familiar with the vehicles and who have ready access to repair and performance records. Planes can be located almost anywhere, yet they still need daily maintenance. That’s just the beginning of the challenges. As Steve Roemerman writes in Aerospace Manufacturing and Design:

Maintenance needs of one plane can differ drastically from another identical model. No two planes are exposed to the same conditions or usage and therefore do not need the same support on the same schedule.

An aircraft’s location, for example, directly influences the time between maintenance. Other factors – such as incomplete maintenance logs, unexpected issues, fleet usage, age, and weather – also make it difficult to create accurate maintenance schedules. In many cases, unexpected issues are only evident after starting repairs, causing major delays or strain on expensive personnel.

Bottlenecks in production and repair can arise if aerospace manufacturers or airline maintenance and repair organizations (MROs) are unable to coordinate the availability of parts for a specific plane with the availability of the right specialists and mechanics. The inevitable result of prolonged maintenance delays is longer manufacturing lead times or reduced availability of in-service aircraft.

In addition to safety and aircraft availability, proper maintenance is important to flight schedules. Passengers are familiar with the frustration of waiting onboard as a repair team tries to fix an unexpected equipment problem that is delaying takeoff. Such delays have a negative impact on an airline’s reputation, particularly in a world where disgruntled passengers can vent their dissatisfaction on social media in real time from the tarmac.

Digital Twin on aircraft

To improve aircraft safety and to increase the efficiency of manufacturing, maintenance, and repair, aircraft manufacturers and MROs are harnessing tools such as artificial intelligence (AI), digital twins, and predictive analytics. Though the aerospace industry has been using analytics and digital twins for at least two decades, the proliferation of data from connected devices combined with AI-powered analytics and high-performance computing (HPC) has allowed engine and aircraft manufacturers, along with MROs, to achieve even greater cost and time efficiencies while continuing to raise the bar on passenger safety and satisfaction.

Digital twins are virtual models of products, processes, systems, services, and devices. These digital replicas produce data for building prescriptive models that can pinpoint problems and solve them in the virtual state. Connecting and tying this maintenance data in with the initial manufacturing design phase and the volumes of data collected during operation  allows aerospace manufacturers to optimize design and production processes, saving time and money and leading to better and safer aircraft.

The benefits of digital twins extend beyond the manufacturing process. Aerospace manufacturers are continually seeking ways to anticipate and address longevity requirements, which also encompass maintenance efforts. Building resilience into an aircraft benefits everyone. When an aircraft engine manufacturer uses digital twin technology, the resulting data can be used to predict exactly when to bring the aircraft in for inspection: “They can ingest engine usage for every flight, including the physics of the engine blades, to see and measure how the engine is operating … virtually.”

While MROs have been slow to implement data-driven solutions, the projected increase in the world airline fleet, along with the need to support both aging aircraft equipment and newer aircraft and systems, is forcing these companies to adopt smart technologies to take full advantage of growing volumes of sensor data, as well as data trapped in silos.

AI and predictive analytics can be deployed by MROs to leverage data created by connected aircraft engines and devices, allowing them to accurately forecast when parts can be expected to fail. Using prescriptive analytics, potential outcomes to a parts failure can be analyzed to determine the best solution.
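As a rough sketch of the idea (all sensor values and thresholds here are invented for illustration), trend-based failure forecasting can be as simple as fitting a line to a degrading sensor reading and extrapolating to a maintenance threshold:

```python
# Sketch of trend-based failure forecasting: fit a line to a degrading
# sensor reading and extrapolate to the maintenance threshold.

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Invented example: exhaust-gas temperature margin (deg C) shrinking
# over flight cycles as the engine wears.
cycles = [0, 100, 200, 300, 400]
margin = [50.0, 45.0, 41.0, 35.0, 30.0]

slope, intercept = fit_line(cycles, margin)
# Forecast the cycle at which the margin hits the maintenance threshold.
threshold = 10.0
predicted_cycle = (threshold - intercept) / slope
print(round(predicted_cycle))  # schedule inspection before this cycle
```

Production systems replace the straight line with far richer physics-informed and ML models, but the shape of the decision (extrapolate degradation, act before the threshold) is the same.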

“Robust analytics can drive streamlined material staging, more efficient labor planning, and more effective equipment check programs,” according to a white paper on how MROs can use data to drive actionable analytics. “When data and analytics streamline engine and component service, carriers can reduce AOG (aircraft on ground) times, minimizing the revenue impact of flight delays, and therefore maximizing uptime for crucial revenue-producing assets.”

By embracing AI, digital twins, and advanced, actionable analytics, players in the aerospace industry can position themselves to take full advantage of their data, technologies, workforces, and processes. This will enable an airline’s MRO to be more resilient.

Human-Centered AI

 

Unlocking human potential in the AI-enabled workplace

For all the hype and excitement surrounding artificial intelligence right now, the AI movement is still in its infancy. The public perceptions of its capabilities are painted as much by science fiction as by real innovation. This youth is a good thing, because it means we can still affect the course of AI’s impact.

If we pursue AI purely with the goal of automating our lives, we risk pushing people aside. We would end up marginalizing human contributions, instead of optimizing them. Instead, we should pursue AI with the goal of augmenting our lives — as a means of benefiting humanity rather than devaluing it. Think of this path as human-centered AI, which seeks to free up people for more creative and innovative work.

The technology is the same, but the goals of the systems we build are different. There’s a fine line between automation and augmentation. So, how can you ensure you’re pursuing human-centered AI? Start with how AI is built.

 

AI development models: The factory vs. the garage

When I was a kid, my dad’s hobby was woodworking, specifically building furniture, and he did it in our garage. What I remember is how he made the most of his space. My mom insisted that she be able to park her car in the garage and that his tools have homes when he wasn’t in the middle of a project. When he was in the middle of something, the garage could look a little chaotic, but it was never cluttered. Everything had a purpose and a home. The garage was designed to fit the needs and constraints of his environment.

Unfortunately, when creating AI we too often think of factories rather than garages. In any factory the goal is efficiency at scale. To achieve efficiency, design is separated from production, and then production is tuned for peak performance. This performance tuning makes many humans in factories simply extensions of tools. To judge whether a factory is set up well, the key metric of production is velocity.

A factory approach doesn’t make sense for something as abstract and virtual as AI development. Compared to a physical factory, software production is cheap to change over and doesn’t require capital investment to be ripped out and replaced. And turning developers into high velocity code assembly lines wastes a huge opportunity to cultivate highly trained, creative, innovative people.

An alternative is to approach AI development similar to the way my dad approached woodworking in his garage. A developer is not an executor of code but a creator. Tools exist to affect the creator’s vision, and the vision adapts based on the productive experience. Design and production work in tandem. The goal isn’t peak performance; it is innovation. The key metric is achievement.

You can recognize this “garage model” when you see people creatively building toward a project or goal. We invest time upfront making sure we all understand and can articulate the goal of the project, the thing we are going to build. AI is more than code and technologies; it is an approach to problem solving. It’s a good approach that we think more people should use, but it’s still just a means to an end. The goal is what matters. When my dad started projects in his garage, he didn’t incrementally explore his way to a finished piece of furniture. He had a piece of furniture in mind and an initial plan for how he was going to make it.


The Applied AI Center of Excellence

When it comes to AI, a garage isn’t only a physical place. In the Applied AI CoE we run garages with teams of people sitting all over the world. A small AI garage has a leader and a team of three to eight people. Larger garages repeat that pattern fractally, reorganizing outward to handle greater complexity. A key thing I have had to remember as an AI garage leader is that my role is not to direct work or control the ideas. That would create a factory and stymie innovation. Instead, my role is to set the initial vision or goal of a project and then prune ideas to maintain focus; in other words, my biggest contribution is to keep the garage clean. For me and other garage leaders, this can be difficult, especially if the leader originated the idea, but even when that is not the case, it can be hard to let go. Success belongs to the team; failure belongs to the leader. It’s natural to want to control away failure, but then the garage model would be lost.

This distinction between the factory and the garage is critical — performance vs. innovation. In a garage model, the people developing AI are centered in the process, and this creates a foundation for a system that reinforces human-centered AI. By increasing the number of people who have a personal stake in how AI is developed, we create an AI that has a stake in the people who use it.

What can a garage do for human-centered AI?

We have used AI garages to do such things as create apps that help people fight decision fatigue, recognize when someone is paying attention or is distracted, use the weaker constraints of the virtual world to reconnect people to the physical world, and create AI Starter libraries to share what we’ve learned.

These examples show that we believe effective AI capabilities don’t push people to the side. Instead, they place humans at the center, augmenting what people can do and how well they can do it. We achieve these things because our AI development model, the “garage model,” is similarly human-centered.

AI’s Possibilities in Healthcare: A Journey into the Future


Artificial intelligence (AI), machine learning and deep learning have become entrenched in the professional world. AI capabilities are being embraced and developed globally (over 26 countries/regions have or are working on a national AI strategy) for many different purposes; from ethics, policies and education to security, technology and industry, the scope is broad and multi-faceted. If, like many others, you are unclear what this new terminology means, it helps to picture a hierarchy: AI is the broadest field, machine learning is a subset of AI, and deep learning is in turn a subset of machine learning. In healthcare, the opportunities are vast and significant. Just from a financial point of view, AI has the potential to bring material cost savings to the industry.

But where should you start, and where do the opportunities lie?


Where to start with AI

First, look at where money is invested — in other words, which start-ups are attracting investors and what is their focus. Rock Health (the first venture fund dedicated to digital health) shows that the top four areas for venture capital investment between 2011 and 2017 were research and development, population health management, clinical workflow and health benefits administration. More than $2.7 billion was invested over 6 years, across 206 start-ups.

Another venture capital and digital health community, Startup Health, which also keeps track of global investments, found that funding is doubling every year for companies which use machine learning technology to enhance health solutions. The companies that focused on diagnostics or screening, clinical decision support and drug discovery tools received the largest share of funding for machine learning in 2018 — i.e., $940 million.

Delving into AI’s opportunities

Perhaps the biggest opportunity lies in assisted robotic surgery, with a potential cost saving of US$40 billion per year. AI-enabled robots can assist surgical procedures by analyzing data from pre-op medical records and past operations to guide a surgeon’s instrument during surgery and to suggest new surgical techniques. The potential benefit to the healthcare organization and the patient is noteworthy: a 21 per cent reduction in length of hospital stay, since robotic-assisted surgery is minimally invasive and patients need less time to recover.

Surgical complications were found to be dramatically reduced, according to one study into AI-assisted robotic procedures involving 379 orthopedic patients. Robotic surgery has been used for eye surgery and heart surgery. For example, heart surgeons have used a miniature robot, called the Heart Lander, to carry out mapping and treatment over the surface of the heart.

Another valuable use of AI is in virtual nursing assistants. One example is Molly, an AI-enabled virtual nurse that has been designed to help patients manage their chronic illnesses or deal with post-surgery requirements. According to a Harvard Business Review article, assistants like Molly could save the healthcare industry as much as US $20 billion annually.

Diagnosis is another exciting development for AI, with some promising findings on the use of AI algorithms to detect skin cancers. A Stanford University report found that deep convolutional neural networks (CNNs) performed as well as dermatologists in classifying skin lesions. Other exciting breakthroughs in AI-assisted diagnosis include a deep-learning program that listens to emergency calls, analyzing what is said, the tone of voice and background noises to determine whether the patient is having a cardiac arrest. Astonishingly, a study from the University of Copenhagen found the AI assistant was right 93% of the time, compared with 73% for human dispatchers.

A fourth potential use for AI lies in digital image analysis, which could help to improve future radiology tools. In one example, a team of researchers from MIT developed an algorithm to rapidly register brain scans and other 3-D images. The result reduces the time to register scans with accuracy comparable to that of state-of-the-art systems.

With so much potential to be gained from AI, healthcare organizations will need to enhance their skills in AI and related capabilities. Decision-makers need to inform themselves about the potential and what is required to achieve those objectives, and then ensure that their teams are properly trained. Culture change in understanding how AI can be used to solve current and future problems is paramount to the future of next-generation healthcare and life sciences organizations.

AI in Transportation


Why AI?

You may have heard the terms analytics, advanced analytics, machine learning and AI. Let’s clarify:

  • Analytics is the ability to record and playback information. You can record the travels of each vehicle and report the mileage of the fleet.
  • Analytics becomes advanced analytics when you write algorithms to search for hidden patterns. You can cluster vehicles by similar mileage patterns.
  • Machine learning is when the algorithm gets better with experience. The algorithm learns, from examples, to predict the mileage of each vehicle.
  • AI is when a machine performs a task that human beings find interesting, useful and difficult to do. Your system is artificially intelligent if, for example, machine-learning algorithms predict vehicle mileage and adjust routes to accomplish the same goals but reduce the total mileage of the fleet.
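As a concrete sketch of these four terms, the toy fleet below records mileage (analytics), clusters vehicles by usage pattern (advanced analytics), and predicts mileage from experience (machine learning); the data and thresholds are illustrative assumptions:

```python
from statistics import fmean

# Toy fleet data: vehicle id -> list of daily mileage readings.
fleet = {
    "truck_1": [120, 130, 125, 128],
    "truck_2": [300, 310, 305, 298],
    "truck_3": [122, 127, 131, 124],
}

# Analytics: record and play back -- report total mileage per vehicle.
totals = {vid: sum(miles) for vid, miles in fleet.items()}

# Advanced analytics: search for hidden patterns -- cluster vehicles
# into "short-haul" and "long-haul" by mean daily mileage.
clusters = {
    vid: ("long-haul" if fmean(miles) > 200 else "short-haul")
    for vid, miles in fleet.items()
}

# Machine learning: improve with experience -- predict tomorrow's
# mileage as the running mean, which sharpens as readings accumulate.
predictions = {vid: fmean(miles) for vid, miles in fleet.items()}

print(totals["truck_1"], clusters["truck_2"], round(predictions["truck_3"], 1))
# → 503 long-haul 126.0
```

An AI layer would then act on these predictions, for example by rerouting the long-haul cluster to cut total fleet mileage.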


AI is often built from machine-learning algorithms, which owe their effectiveness to training data. The more high-quality data available for training, the smarter the machine will be. And the amount of data available for training intelligent machines has exploded: by one widely cited estimate, by 2020 every human being on the planet was creating about 1.7 megabytes of new information every second. According to IDC, information in enterprise data centers was projected to grow 14-fold between 2012 and 2020.

And we are far from putting all this data to good use. Research by the McKinsey Global Institute suggests that, as of 2016, those with location-based data typically capture only 50 to 60 percent of its value.  Here’s what it looks like when you use AI to put travel and transportation data to better use.



Take care of the fleet

Get as much use of the fleet as possible. With long-haul trucking, air, sea and rail-based shipping, and localized delivery services, AI can help companies squeeze inefficiencies out of these logistics-heavy industries throughout the entire supply chain. AI can help monitor and predict fleet and infrastructure failures. AI can learn to predict vehicle failures and detect fraudulent use of fleet assets. With predictive maintenance, we anticipate failure and spend time only on assets that need service. With fraud detection, we ensure that vehicles are used only for intended purposes.
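As a minimal sketch of predictive maintenance, the function below flags a vehicle when its latest sensor reading deviates sharply from its own history; real systems learn far richer failure models, and the readings and threshold here are assumptions:

```python
from statistics import fmean, stdev

def needs_service(readings, latest, z_threshold=3.0):
    """Flag a vehicle for maintenance when its latest sensor reading
    deviates strongly from its own history (a simple z-score test)."""
    mu = fmean(readings)
    sigma = stdev(readings)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > z_threshold

# Engine-temperature history for one truck (degrees C).
history = [88, 90, 89, 91, 90, 89, 88, 90]
print(needs_service(history, 90))   # typical reading -> False
print(needs_service(history, 115))  # abnormal spike -> True, schedule service
```

With a rule like this running per vehicle, crews spend time only on assets that actually look likely to fail.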

AI combined with fleet telematics can decrease fleet maintenance costs by up to 20 percent. The right AI solution could also decrease fuel costs (due to better fraud detection) by 5 to 10 percent. You spend less on maintenance and fraud, and extend the life and productivity of the fleet.

Take care of disruption

There will be bad days. The key is to recover quickly. AI provides the insights you need to predict and manage service disruption. AI can monitor streams of enterprise data and learn to forecast passenger demand, operations performance and route performance. The McKinsey Global Institute found that using AI to predict service disruption has the potential to increase fleet productivity (by reducing congestion) by up to 20 percent. If you can predict problems, you can handle them early and minimize disruption.
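A minimal sketch of demand forecasting and disruption flagging, assuming a trailing moving average is a good-enough predictor (production systems would use far richer models, and the ridership numbers are invented):

```python
from statistics import fmean

def forecast_demand(daily_riders, window=7):
    """Naive next-day demand forecast: trailing moving average."""
    return fmean(daily_riders[-window:])

def disruption_risk(forecast, capacity):
    """Flag routes where forecast demand exceeds available capacity,
    so operators can add vehicles before congestion builds."""
    return forecast > capacity

riders = [410, 395, 420, 430, 405, 415, 440, 450, 470, 480]
f = forecast_demand(riders)
print(round(f, 1), disruption_risk(f, capacity=430))
# → 441.4 True
```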

Take care of business

Good operations planning makes for effective fleets. AI can augment operations decisions by narrowing choices to only those options that will optimize pricing, load planning, schedule planning, crew planning and route planning. AI combined with fleet telematics has the potential to decrease overtime expenses by 30 percent and decrease total fleet mileage by 10 percent. You cut fleet costs by eliminating wasteful practices from consideration.

Take care of the passenger

The passenger experience includes cargo: cargo itself has no experience, but the people shipping it do. Disruptions happen, but the best passenger experiences come from companies that respond quickly. AI can learn to automate both logistics and disruption recovery. It can provide real-time supply and demand matching, pricing and routing. According to the McKinsey Global Institute, AI’s improvement of the supply chain can increase operating margins by 5 to 35 percent. AI’s dynamic pricing can potentially increase profit margins by 17 percent. Whether it’s rebooking tickets or making sure products reach customers, AI can help you deliver a richer, more satisfying travel experience.

Applied AI is a differentiator

If we see AI as just technology, it makes sense to adopt it according to standard systems engineering practices: Build an enterprise data infrastructure; ingest, clean, and integrate all available data; implement basic analytics; build advanced analytics and AI solutions. This approach takes a while to get to ROI.

But AI can mean competitive advantage. When AI is seen as a differentiator, the attitude toward AI changes: Run if you can, walk if you must, crawl if you have to. Find an area of the business that you can make as smart as possible as quickly as possible. Identify the data stories (like predictive maintenance or real-time routing) that you think might make a real difference. Test your ideas using utilities and small experiments. Learn and adjust as you go.

It helps immensely to have a strong Analytics IQ, a sense for how to put smart machine technology to good use. We’ve built a short assessment designed to show where you are and suggest practical steps for improving. If you’re interested in applying AI in travel and transportation and are looking for a place to start, take the Analytics IQ assessment.

MLOps Principles for AI Development


Many companies are eager to use artificial intelligence (AI) in production, but struggle to achieve real value from the technology.

What’s the key to success? Creating new services that learn from data and can scale across the enterprise involves three domains: software development, machine learning (ML) and, of course, data. These three domains must be balanced and integrated together into a seamless development process.

Most companies have focused on building machine learning muscle – hiring data scientists to create and apply algorithms capable of extracting insights from data. This makes sense, but it’s a rather limited approach. Think of it this way: They’ve built up the spectacular biceps but haven’t paid as much attention to the underlying connective tissues that support the muscle.

Why the disconnect?

Focusing mostly on ML algorithms won’t drive strong AI solutions. It might be good for getting one-off insights, but it isn’t enough to create a foundation for AI apps that consistently generate ongoing insights leading to new ideas for products and services.

AI services have to be integrated into a production environment without risking deterioration in performance. Unfortunately, performance can decline without proper data management, as ML models will degrade quickly unless they’re repeatedly trained with new data (either time-based or event-triggered).
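The time-based and event-triggered retraining policies described above can be sketched as a simple decision function; all thresholds here are illustrative assumptions:

```python
from datetime import datetime, timedelta

def should_retrain(last_trained, new_rows, accuracy,
                   max_age_days=30, row_trigger=10_000, min_accuracy=0.90):
    """Decide whether to retrain a deployed model.

    Combines both trigger styles:
    - time-based: the model is older than max_age_days
    - event-triggered: enough new data arrived, or live accuracy
      has dropped below an acceptable floor
    """
    too_old = datetime.now() - last_trained > timedelta(days=max_age_days)
    enough_data = new_rows >= row_trigger
    degraded = accuracy < min_accuracy
    return too_old or enough_data or degraded

fresh = datetime.now() - timedelta(days=2)
print(should_retrain(fresh, new_rows=500, accuracy=0.95))   # healthy -> False
print(should_retrain(fresh, new_rows=500, accuracy=0.80))   # drifted -> True
```

A scheduler evaluating this policy regularly keeps models from degrading silently in production.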

Professionalizing the AI development process

The best approach to getting real and continuous value from AI applications is to professionalize AI development. This approach conforms to machine learning operations (MLOps), a method that integrates the three domains behind AI apps in such a way that solutions can be quickly, easily and intelligently moved from prototype to production.


AI professionalization elevates the role of data scientists and strengthens their development methods. Like all scientists, these professionals bring with them a keen appreciation for experimentation. But often their dependence on static data for creating machine learning algorithms, which they developed on local laptops using preferred tools and libraries, prevents production AI solutions from continuously producing value. Data communication and library dependency problems will take their toll.

Data scientists can continue to use the tools and methods they prefer, their output accommodated by loosely coupled DevOps and DataOps interfaces. Their ML algorithm development work becomes the centerpiece of a highly professional factory system, so to speak.

Smooth pilot-to-production workflow

Pilot AI solutions become stable production apps in short order. We use DevOps technology and techniques such as continuous integration and continuous delivery (CI/CD) and have standard templates for automatically deploying model pipelines into production. By using model pipelines, training and evaluation can happen automatically when needed, for instance when new data arrives, without human involvement.
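A model pipeline with an automatic evaluation gate can be sketched in miniature; the stand-in data, "model", and error threshold below are assumptions for illustration, not actual deployment templates:

```python
def ingest():
    """Stand-in for pulling the latest training data."""
    return [(x, 2 * x) for x in range(10)]

def train(data):
    """Stand-in 'model': learn the slope of y = kx from the data."""
    slope = sum(y for _, y in data) / sum(x for x, _ in data)
    return lambda x: slope * x

def evaluate(model, data):
    """Mean absolute error over the pairs."""
    return sum(abs(model(x) - y) for x, y in data) / len(data)

def deploy(model):
    return {"status": "deployed"}

# The pipeline runs end to end; the evaluation gate blocks bad models
# from reaching production, with no human in the loop.
data = ingest()
model = train(data)
error = evaluate(model, data)
result = deploy(model) if error < 0.1 else {"status": "rejected"}
print(error, result["status"])
# → 0.0 deployed
```

The same structure, with real ingestion, training, and deployment steps, is what CI/CD tooling triggers automatically when new data arrives.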

Our versioning and tracking ensure that everything can be reused, reproduced and compared if necessary. Our advanced monitoring provides end-to-end transparency into production AI use cases (including data and model pipelines, data quality and model quality and model usage).

Using our innovative MLOps approach, we were able to bring the pilot-to-production timeline for one U.S. company’s AI app down from six months to less than one week. For a UK company, the window for delivering a stable AI production app shrank from five weeks to one day.

The transparency of AI solutions, and confidence in their agility and stability, is critical. After all, the value lies in the ability to use AI to discover new business models and market opportunities, deliver industry-disrupting products and creatively respond to customer needs.

Era of AI in Cybersecurity



Cyber attacks are increasing rapidly, and zero-day attacks are no longer rare. To cope with these evolving threats, organizations need more advanced countermeasures. This is where AI in cybersecurity comes into play.

These days there are tools and security devices that use AI to make attack detection and prevention easy and automated. AI brings techniques such as behavioral analysis and automation that are opening up a new space in the field.

Role of AI in cybersecurity

AI has opened new horizons and opportunities to detect and mitigate cyberattacks. New cyberthreats emerge every day and expand a firm’s attack surface. AI in cybersecurity can dig deeper into key areas to find those threats and adapt itself to mitigate them.

AI can identify and prevent cyberattacks

AI detection engines ship with reference modules and predetermined attack signatures that help the user detect inbound cyber attacks easily. Many attackers use predefined scenarios, methodologies, and techniques to attack websites and applications, so AI-based detection techniques make these attacks easy to identify. Once an ongoing attack is identified, you can add new rules to the AI engine to help mitigate it.
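The simplest piece of what such an engine does, matching requests against known attack patterns, can be sketched as follows; the signature set is an illustrative assumption, and real engines layer behavioral and ML models on top of far larger rule bases:

```python
import re

# Illustrative signature set -- real engines ship far larger rule bases.
SIGNATURES = {
    "sql_injection": re.compile(r"('|%27)\s*(or|union)\b", re.IGNORECASE),
    "path_traversal": re.compile(r"\.\./"),
    "xss": re.compile(r"<script\b", re.IGNORECASE),
}

def detect(request: str):
    """Return the names of known attack patterns found in a request."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern.search(request)]

print(detect("GET /search?q=' OR 1=1--"))          # ['sql_injection']
print(detect("GET /files?path=../../etc/passwd"))  # ['path_traversal']
print(detect("GET /home"))                         # []
```

Adding a rule after identifying a new attack is just extending the signature table, which is the "pre-requisite" step the text describes.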

The automation of cyberattacks


AI in cyberspace is growing rapidly and is both a boon and a bane for industry. On one hand, AI in cybersecurity helps automate the mitigation of cyber threats; on the other, it helps malicious actors create automated cyberattacks. These attacks are pre-programmed based on analysis of an organization’s threat vectors and strike it in various ways.

Recent research shows that the threat landscape is growing due to the availability of open-source AI-enabled hacking tools and software. One cybersecurity firm’s report documented three such threats active in the wild within the past 12 months. Analysis of these attacks, plus a little imagination, suggests that even small-time attackers such as script kiddies and newbies could use AI to create far more dangerous and threatening scenarios.

Impact of AI in cybersecurity space

The presence of AI in the cybersecurity space has opened new horizons for attackers and defenders alike. AI is changing the landscape of cyberspace, and it serves either side impartially; sooner or later it will be the key differentiator between the two.

AI has already helped cybersecurity researchers and continues to do so in every way possible.

AI has impacted cyberspace in the following areas:

  • Identification of threats
  • Mitigation of threats
  • Vulnerability assessment of the organization
  • Constant monitoring of the organization’s threat posture
  • Reporting and accounting of the firm’s cyber threats

 
