Machine Learning vs. You

Have you ever looked at your computer or phone in awe and wondered whether it might be smarter than you? The philosophical debate over the nature of intelligence has raged on for decades, but the advent of machine learning has brought it roaring back. After all, when a computer can comb through years of company data and solve a complex problem within seconds, it is hard not to give the argument some credence. Yet regardless of whether intelligence can even be measured this way, the practical answer is that neither is smarter: humans and machines must work together effectively to solve tomorrow's problems. Follow the Rippleshot Team as we discuss the origins of machine learning, its implications for the future, and how you can leverage its power to benefit your institution.

Who are you and where are you from?

The concept of machine learning traces its roots back to classical statistics, where techniques for drawing inferences from small data sets were developed between the 18th and early 20th centuries. For most of that history, the field was constrained by the limited size of available data sets and, later, by the modest processing power of early computers.

Although much of the foundation for machine learning came from classical statistics, computational neuroscience supplied a missing puzzle piece in the 1940s and 1950s. Alan Turing, fascinated by the idea of artificial intelligence, was one of the first to ask whether machines could be intelligent. In 1950, he proposed the “Turing Test” as a practical benchmark: to pass, a computer would have to convince a human interrogator that it, too, was human.

Turing’s work, along with that of many other statisticians and technologists, sparked a series of developments in machine learning, such as the work of Arthur Samuel, the IBM scientist who wrote the first program that learned to play checkers. By playing repeated games against itself and human opponents, the program developed winning strategies and improved its performance, making it the first program to “learn by itself.” Further progress came with Frank Rosenblatt’s introduction of the first artificial neural network, the perceptron, which promised to mimic the thought processes of the brain. Interest in neural networks faded after Marvin Minsky and Seymour Papert showed in 1969 that a single-layer perceptron could not solve problems such as XOR, but the idea has since reemerged as one of the field’s most prominent topics.
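To see the limitation Minsky and Papert pointed out, consider the short sketch below (illustrative only, written in modern Python rather than anything Rosenblatt actually used): a single-layer perceptron trained with the classic update rule can never classify all four XOR cases correctly, because no single straight line separates them.

```python
# A minimal single-layer perceptron trained on XOR, illustrating why
# Minsky and Papert's critique mattered: XOR is not linearly separable,
# so one perceptron can never get all four cases right.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
y = np.array([0, 1, 1, 0])                      # XOR labels

w = np.zeros(2)  # weights
b = 0.0          # bias
lr = 0.1         # learning rate

for epoch in range(100):
    for xi, target in zip(X, y):
        pred = int(np.dot(w, xi) + b > 0)   # step activation
        w += lr * (target - pred) * xi      # perceptron update rule
        b += lr * (target - pred)

preds = (X @ w + b > 0).astype(int)
print(preds, "accuracy:", (preds == y).mean())  # never reaches 1.0 on XOR
```

Stacking perceptrons into multiple layers removes this limitation, which is exactly the idea that later returned as modern deep learning.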

A Waiting Game

Even after machine learning models had been built and validated on small data sets, hardware bottlenecks kept them from being run at their full potential. Well into the late 1970s and early 1980s, practitioners often had to wait their turn for scarce computing time just to test their models. The 1990s, however, saw machine learning turn over a new leaf, as the field shifted from a knowledge-driven approach to a data-driven one.

As understanding of machine learning advanced, innovation began taking place at every corner. In 1997, IBM celebrated a win over reigning world chess champion Garry Kasparov with its computer Deep Blue. A flurry of milestones followed, including the introduction of Google Brain and X Lab, Facebook’s DeepFace, and, most recently, the 2016 triumph of Google’s AlphaGo over professional Go player Lee Sedol.

Google and Facebook are by no means alone. Traditional businesses are quickly catching on to the benefits of building machine learning into their strategies, and virtually every industry is fair game for disruption. This past spring, for example, teams in the US National Basketball Association championship relied on data analytics from Second Spectrum, a California-based startup. By digitally capturing player statistics and tracking data, the company built predictive models that let coaches, as CEO Rajiv Maheswaran puts it, distinguish between “a bad shooter who takes good shots and a good shooter who takes bad shots.” If that isn’t “traditional” enough for you, consider General Electric (GE), which has been collecting and leveraging data on everything from deep-sea oil wells to jet engines in order to maximize performance, predict breakdowns, and streamline processes.

Great... so what about me?

At this point, you’re probably asking yourself: machine learning seems great and all, but how can I start using it? When it comes to buy-in from employees and the C-suite, it is crucial to treat machine learning as a tool for executing strategy. Much as with mergers and acquisitions (M&A), top management should commit to considering all possible options, pursuing the chosen strategy with full vigor, and building or acquiring the knowledge needed to execute it.

As Dorian Pyle and Cristina San Jose explain in An Executive’s Guide to Machine Learning, bringing machine learning to life within a company requires two types of people. “Quants” have a deep understanding of machine learning and its methods, while “Translators” know how to refashion a quant’s convoluted results into insights that managers can act on.

Effective machine learning depends heavily on large volumes of data, which makes data strategy another key element. To formulate a strong data strategy, companies must identify the gaps in their data, determine the cost of filling them, and break down any existing silos. Making data collection a front-line management responsibility, starting small, and broadcasting early machine learning victories also go a long way toward generating employee buy-in.

Description, Prediction, Prescription

Description consists of compiling data into databases, a stage that many companies have already completed. The prediction stage is more urgent, and also more sophisticated, since machine learning can offer significantly sharper predictions than, for instance, a classical regression model. This stage can prove the most stressful for executives, as concerns over data quality come to the forefront. Because the search for data is endless, the question becomes whether new data sources are needed and how much benefit they can really add to the end result, a question a “Chief Data Scientist” is usually brought in to answer.
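As a rough illustration of that gap between classical regression and machine learning (a hedged sketch on synthetic data, not anything tied to Rippleshot’s own models), the snippet below fits a plain linear regression and a gradient-boosted ensemble to the same nonlinear data and compares their test error:

```python
# A rough, illustrative comparison (synthetic data, not real fraud or customer
# data): a flexible tree-based learner picks up nonlinear patterns that a
# classical linear regression misses.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(2000, 3))
# Target with interactions and thresholds that no straight line can capture
y = np.sin(X[:, 0]) * X[:, 1] + 3 * (X[:, 2] > 5) + rng.normal(0, 0.3, 2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [("classical linear regression", LinearRegression()),
                    ("gradient-boosted trees", GradientBoostingRegressor(random_state=0))]:
    model.fit(X_train, y_train)
    error = mean_absolute_error(y_test, model.predict(X_test))
    print(f"{name}: mean absolute test error = {error:.2f}")
```

On data like this, the boosted model’s error is typically a fraction of the linear model’s, which is exactly the kind of lift the prediction stage is meant to capture.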

Finally, prescription is the most advanced stage of machine learning, and the one that requires the most attention from management to succeed. Predicting what customers are going to do is one thing; understanding why is far more difficult, yet far more lucrative. Today’s machine learning algorithms are steadily being developed to answer such bottom-line questions for companies, but human-machine collaboration will remain the fuel that keeps the engine running. Ultimately, there is no machine learning versus you. It’s machine learning AND you.



Want to know how Rippleshot uses machine learning to save financial institutions money and customers?

Click below to learn more!

Schedule Your Demo

