Monday, August 6, 2018

4 Artificial Intelligence Use Cases That Don’t Require A Data Scientist

By Siddhartha Agarwal
I meet with a lot of business and tech leaders, and nearly all of them ask at some point about artificial intelligence. They’re worried that their company is missing out on this coming AI revolution, and falling behind rivals because they don’t have the deep tech skills to put it to use.
I tell them that getting real value from AI, and from its related discipline of machine learning, doesn’t have to be that hard.
That’s because they can tap into AI embedded within cloud services, which they can quickly launch and put to use. Here are four AI use cases I give as examples of how they can quickly tap the benefits of AI without much work—and without an army of data scientists.
Chatbots and Natural Language Processing
Natural language processing (NLP) is a branch of AI that can understand spoken language, figuring out the intent of the speaker and offering an appropriate response. Chatbots use NLP to let customers have a back-and-forth conversation to ask questions and get information. The US utility Exelon, for example, is using a chatbot to let people report outages or get billing information. And Exelon built its pilot chatbot in just two weeks. The Indian appliance maker Bajaj Electricals uses chatbots to let customers request a demo, set up a technician appointment, or report a problem.
By using AI, chatbots leave customers with a much better impression than simplistic, automated phone systems that follow a script. That’s because chatbots can understand intent even if a customer doesn’t use the exact words in a script. A person creating a banking chatbot might set it up to respond to “What is my checking balance?” and the bot will know, or learn, that “How much do I have in checking?” is the same question. Also, an AI-powered chatbot can maintain the context of a conversation. So if I ask a banking chatbot, “What’s my checking balance?” and my next request is, “Send my mom $100,” it knows that I’m probably sending that money from my checking account.
And finally, chatbots get smarter with time, thanks to machine learning. As customers ask questions again and again, a smart chatbot platform uses that data to refine its understanding of intent and its responses.
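To make that intent-and-context idea concrete, here is a minimal Python sketch of how a bot might map paraphrased questions onto the same intent and carry context between turns. The intents, sample phrases, balance figure, and the “last_account” context slot are all invented for illustration; real chatbot platforms rely on far more sophisticated NLP models.

```python
# A minimal sketch (not any vendor's actual chatbot API) of intent matching
# plus conversational context. All intents, phrases, and responses are made up.

INTENTS = {
    "check_balance": ["what is my checking balance", "how much do i have in checking"],
    "send_money":    ["send my mom 100 dollars", "transfer money to mom"],
}

def match_intent(utterance: str) -> str:
    """Pick the intent whose sample phrases share the most words with the utterance."""
    words = set(utterance.lower().replace("?", "").replace("$", "").split())
    def overlap(intent: str) -> int:
        return max(len(words & set(p.split())) for p in INTENTS[intent])
    return max(INTENTS, key=overlap)

context = {}  # carries state between turns, e.g. which account was last discussed

def handle(utterance: str) -> str:
    intent = match_intent(utterance)
    if intent == "check_balance":
        context["last_account"] = "checking"
        return "Your checking balance is $1,250."          # placeholder response
    if intent == "send_money":
        account = context.get("last_account", "checking")  # reuse prior context
        return f"Sending $100 from your {account} account."
    return "Sorry, I didn't understand that."

print(handle("What is my checking balance?"))   # matches the balance intent
print(handle("Send my mom $100"))               # infers the checking account from context
```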
Monitoring Your Data Center
Today, your IT operations team likely spends a huge amount of time and mental energy tending to performance thresholds—for example, when an application slows down too much, the system generates an alert. But as the application code, the configurations, or the infrastructure change, the ops team must constantly reset and manage those thresholds. The amount of monitoring data generated is also growing significantly, which means the IT ops team is doing a lot of work just managing logs, which provide the data for setting thresholds.
A better way is to put all the web, application, and database performance data, the user experience data, and the log data into one cloud-based data platform. Then let that system—using baseline-setting algorithms in machine learning—learn what the thresholds should be. With the baseline established, another technique called anomaly detection can identify when application performance is trending toward those thresholds, and trigger alerts with suggested corrective actions or automatically take corrective action. Eliminating manual threshold setting means significant time and cost savings.
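As a rough illustration of that baseline-plus-anomaly-detection approach, here is a small Python sketch. The response-time numbers and the three-standard-deviation tolerance are assumptions made for the example, not the algorithm any particular monitoring product uses.

```python
# A toy baseline learner and anomaly detector for a single performance metric.
# Real systems learn baselines per metric, per time of day, and per deployment.

from statistics import mean, stdev

def learn_baseline(samples):
    """Learn a simple baseline (mean and standard deviation) from historical metrics."""
    return mean(samples), stdev(samples)

def is_anomaly(value, baseline, tolerance=3.0):
    """Flag values more than `tolerance` standard deviations away from the baseline."""
    mu, sigma = baseline
    return abs(value - mu) > tolerance * sigma

# Historical response times in milliseconds (illustrative data).
history = [120, 118, 125, 130, 122, 119, 127, 124]
baseline = learn_baseline(history)

for latest in (126, 210):  # 210 ms should trip the alert
    if is_anomaly(latest, baseline):
        print(f"ALERT: {latest} ms is outside the learned baseline")
    else:
        print(f"{latest} ms looks normal")
```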
Adani Ports & Special Economic Zone, India’s largest ports developer and operator, is embracing this predictive and automated approach for setting thresholds and taking corrective action. Adani runs global ports that operate around the clock, and its port management applications are vital to running the business. Predicting breakdowns before they cause delays brings a major competitive edge. Also, letting AI take over more of the routine monitoring and threshold setting, and centralizing that monitoring in a cloud-based system, helps Adani run a lean IT team focused on business issues rather than maintenance. And having predictable performance and highly available IT systems gives the business confidence to embrace new technology.
Analytics
Today, business analysts have to be fairly technical to run queries on masses of data. They have to write queries, generate data visualizations, and often move data to a location with the computing capacity to run heavy queries. All of this means it can take so long to get insight from data that, by the time it arrives, the insight is useless.
Imagine you wanted to find out why attrition in your organization seems high. Ask five different analysts that question, and you’ll get five different strategies to figure it out. With artificial intelligence, people without much technical background can ask a question like “What is happening with employee attrition in my company?” and the system can tell you which factors are most correlated with attrition. What makes this real AI is that those factors aren’t hard-wired in a software application—they’re not applying the same rules or formula to every company. Instead, using AI, a cloud service can look at your human resources data and conclude that longevity at the company, salary level, equity compensation, and title are the best indicators of why people leave. Then it can segment the data and tell you which salary band has the most attrition. And using anomaly detection, AI can spot outliers, like a department or manager within a salary band that has higher attrition than others. And finally, using predictive analytics, you can see which people might be most likely to leave.
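Here is a hedged sketch of that kind of analysis in Python with pandas, run against a made-up HR table: it ranks which numeric factors move together with attrition and then segments the attrition rate by salary band. The column names and data are invented; a production service would weigh many more signals and use proper predictive models.

```python
# Toy attrition analysis: rank factors by correlation with leaving, then segment.
import pandas as pd

hr = pd.DataFrame({
    "left_company": [1, 0, 0, 1, 1, 0, 0, 1],   # 1 = employee left
    "tenure_years": [1, 6, 4, 2, 1, 8, 5, 2],
    "salary_band":  [2, 4, 3, 2, 1, 5, 4, 2],
    "equity_grant": [0, 1, 1, 0, 0, 1, 1, 0],   # 1 = has equity compensation
})

# Correlate every factor with attrition and rank by strength of association.
correlations = (
    hr.corr()["left_company"]
      .drop("left_company")
      .abs()
      .sort_values(ascending=False)
)
print("Factors most associated with attrition:")
print(correlations)

# Segment: attrition rate within each salary band.
print(hr.groupby("salary_band")["left_company"].mean())
```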
Like any analysis, professionals need to take that data and apply their own perspective and experience. Is that high-attrition group paid below market rates, or suffering from poor management? AI isn’t a magical box of answers. But AI lets analysts start miles down the road from where they started in the past.
Valdosta State University, for example, is a public university in southern Georgia with more than 11,000 students, and it’s using predictive analytics to spot red flags that a student could be at risk of dropping out. When they see those red flags, the school assigns an “interventionist” from the school’s staff to help students through problems they’re facing.
AI Built into Applications
Perhaps the easiest way of all to tap into AI is to use AI capabilities that providers are building into their applications. Oracle calls this built-in capability “adaptive intelligence,” and it’s building it into its application portfolio of ERP, human capital management, marketing, supply chain, and more. These applications take first-party data within your software-as-a-service instance and third-party data from external sources, and make recommendations. If you’re using a marketing application, what’s the best offer to make next to a customer? In sales, what’s the best prospect to call next? In HR, among new hires who are thriving at the company, what factors did they have coming in that might predict success among incoming candidates?
These four fast tracks to AI value aren’t the only ways companies will tap this opportunity, of course. As teams gain AI experience, they’ll embrace more sophisticated approaches, writing their own domain-specific algorithms that use machine learning to solve a need at their company. But that’s not where most people are today. They’re looking for—and finding—quick ways to use AI and machine learning to find growth opportunities and lower the cost of IT operations, which helps them fund innovation. The lesson from their experience: Don’t wait. AI success is within your grasp.
Siddhartha Agarwal is group vice president of product management and strategy for Oracle Cloud Platform.

Sunday, July 22, 2018

Facebook hopes to launch an internet satellite in early 2019

Facebook has cooperated on internet satellite initiatives (with less-than-ideal results), but there's been precious little word of plans to make its own satellite beyond high-level promises. Now, however, there's something tangible. Both publicly disclosed FCC emails and a direct confirmation to Wired have revealed that Facebook aims to launch an internally developed satellite, Athena, sometime in early 2019. A spokesperson didn't share details, but the shell organization Facebook used to keep filings hidden (PointView Tech LLC) talked about offering broadband to "unserved and underserved" areas with a low Earth orbit satellite on a "limited duration" mission.
This is likely just an experiment rather than a full-fledged deployment. Low Earth orbit satellite internet would require a large cloud of satellites to provide significant coverage, similar to SpaceX's planned Starlink network. However, it shows that the company isn't done building its own high-altitude hardware now that it has stopped work on its internet drone.
Whatever Athena shapes up to be, Facebook's motives likely remain the same. As with Alphabet's Loon internet balloons, there's a strong commercial incentive to connect underserved regions. Even if Facebook doesn't charge a thing for access, it could benefit by adding millions of new users who'd view ads and use services (including through Instagram and WhatsApp). It would also look good to investors, as Facebook would keep its audience growing at a time when there's seemingly no more room to grow.

Saturday, July 21, 2018

Google's New Cirq Project Aims to Make Quantum Computers Actually Useful

A Hofstadter butterfly, a pattern describing how electrons behave in a magnetic field, which Google researchers have simulated. Graphic: Wikimedia Commons user Mytomi
Last year, we reported that a new era of quantum computing is upon us: the NISQ, or Noisy Intermediate Scale Quantum era, in which quantum computers are still small and error prone, but they actually do something valuable. That second part is still somewhat aspirational, though, so companies like Google are offering frameworks so the public can develop useful algorithms for quantum computers.
Google this week announced Cirq, its open-source framework for these NISQ computers. The framework doesn’t run on a real quantum computer yet (just a simulation of one) but will hopefully lead to quantum computers finding some use.
“Cirq is focused on near-term questions and helping researchers understand whether NISQ quantum computers are capable of solving computational problems of practical importance,” Alan Ho and Dave Bacon, product and software leads from the Google AI Quantum Team, wrote in a blog post.
Quantum computers are devices meant to perform calculations the way that traditional computers do, but with a different set of ground rules. Classical computer algorithms must ultimately be translated into zeroes and ones. Quantum algorithms instead rely on the mathematics of quantum computers, where the most basic unit is more like a point on a sphere during the calculation, but a zero or one for the final result. These quantum bits, or qubits, communicate with one another through entanglement, the quantum mechanical phenomenon in which sets of qubits are described by a single, inseparable mathematical state until the machine measures them.
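To see those ideas in a few lines of code, here is a short example written with Cirq (assuming Cirq is installed via pip install cirq). It puts one qubit into superposition, entangles it with a second, and measures both on Cirq’s built-in simulator, so every run collapses to plain zeroes and ones.

```python
import cirq

q0, q1 = cirq.LineQubit.range(2)
circuit = cirq.Circuit(
    cirq.H(q0),                      # superposition: q0 is a "point on the sphere"
    cirq.CNOT(q0, q1),               # entanglement: q1's result now mirrors q0's
    cirq.measure(q0, q1, key="m"),   # measurement collapses each qubit to 0 or 1
)

result = cirq.Simulator().run(circuit, repetitions=100)
print(result.histogram(key="m"))     # ~half of the runs give 00, half give 11 (counts for 0 and 3)
```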
These machines exist, but in their current state they are incredibly noisy, meaning they easily interact with nature and lose their quantum-ness, essentially becoming regular computers. Some physicists think that existing quantum computers are becoming just complex enough to be useful, and could potentially solve some problems better than classical computers can. Researchers still must find out what problems those are—what problems could actually benefit from these noisy, intermediate-scale devices.
Google’s Cirq joins a slew of other frameworks that allow programmers to run quantum circuits. IBM has public-facing, 20- and 16-qubit devices, which programmers can play with and research using the IBM Q Experience. Startup Rigetti has a 19-qubit device accessible through its Forest programming environment. Then there’s D-Wave, which also offers consumer products, but its computer works differently from the rest of the competition (more on that here). Like Microsoft’s, Google’s framework is not built on real quantum hardware, but on a classical computer that simulates a quantum computer. Eventually, programmers will use Cirq to access Google’s upcoming 72-qubit Bristlecone processor.
Several startups have been testing Cirq prior to Google’s announcement. Quantum Benchmark, for example, offers what are essentially quantum diagnostic tools that can inform an end user about error rates in the quantum processor, and help to suppress those errors.
“Google has expertise that’s branded and recognizable, so it’s great for them to recognize the value that we’re bringing,” Quantum Benchmark CEO Joseph Emerson told Gizmodo.
One advantage of Google’s simulator is that users will eventually be able to run large-scale problems on it, said Matt Johnson, CEO of QCWare, a startup whose software allows clients to run quantum algorithms on multiple hardware platforms. “It’s going to allow our customers to exploit what’s going to certainly be one of the leading hardware systems in terms of power.”
Still, we’re definitely in the early stages of this tech. Sydney Schreppler, a postdoctoral fellow in physics at UC Berkeley, told Gizmodo that NISQ was a “hopeful” term. “The hope is that industry-academia collaborations may result in some useful applications for the quantum processors that already exist in academic labs and at companies like Google, IBM, and Rigetti.” She said a newly generated algorithm, or a link to actual hardware like Bristlecone, would be more exciting news, but that “cool new applications for existing hardware could be coming down the line.”
Development continues, with companies hoping to eventually find industry uses for these noisy quantum computers. But programmers and scientists have barely scratched the surface of quantum usefulness.
