Soon, France will have a new leader. If it’s Marine Le Pen, then all hell will break loose. I bet you’d like to know the outcome – before it’s announced.
Today, we’ll find out how that might be possible.
Many investments are predicated on election results. Politicians change, policies change and the public mood changes. For example: a friend of mine invested heavily in a US healthcare business, serving mainly gay clients. It’s a successful operation, and expansion was underway. However, within days of the Donald Trump victory, an angry mob of protesters had overrun their construction site and destroyed their building works. Polls can’t predict mobs, and they can’t even predict policies, but they should be able to predict the elections that underpin such changes.
However, the pollsters have recently let us down. Voters in several countries have surprised many with a swing to radically populist voting patterns.
Who can you rely on in this world of turmoil?
Today I’ll be interviewing Vuk Vukovic, the director and co-founder of Oraclum Intelligence Systems. This small startup is based in Cambridge, UK – and it’s all about predicting election results. But why should you listen to him, rather than the usual pundits? Well, I’ll let him explain…
AL: Can you start off by telling me a bit about Oraclum?
VV: Oraclum is a company that uses the power of social networks and big data to predict election outcomes, and uncover patterns of consumer behaviour. We have developed a unique, science-based forecasting method that is able to predict elections with amazing precision. Our team is a group of scientists with backgrounds in physics, computer science, economics and politics.
AL: What’s the problem you’re trying to fix?
VV: 2016 was not a good year for pollsters. Brexit and Trump confounded predictions. And it wasn’t only the pollsters: the mainstream poll-based forecasters, prediction markets and betting markets all got it wrong. They all estimated high probabilities for a Hillary Clinton victory – and, earlier in the year, for a rejection of Brexit.
Our firm got both of these right – we were among the few to see it coming. We predicted a Trump victory with remarkable precision, correctly calling 47 out of 50 states (including Pennsylvania, Florida, North Carolina, Ohio and the other major swing states). We also anticipated that Clinton would win the popular vote but still lose the election.
AL: Can you tell us more about how it works?
VV: We developed a unique, science-based method that uses the power of social networks to predict election results accurately. We tested it on Brexit first and got great results, then kept improving it for the US election, where we successfully predicted a Trump victory.
AL: What were your predictions for Brexit and Trump?
VV: On Brexit we used six different methods, and our best one gave us 51.3% for Leave – the actual outcome was 51.9%. Three days before the referendum we could see that Brexit was likely to happen. However, we did not go public with this result, because we wanted to test the method further. After Brexit we carefully adjusted the method for the US election, using the variant that had given the most accurate Brexit prediction.
AL: Did you predict Trump’s victory the same way?
VV: The same method that gave us that result for Brexit was the one that correctly predicted Trump, calling all the major swing states in his favour: Pennsylvania (which not a single pollster gave to him), Florida, North Carolina and Ohio. We correctly gave Virginia, Nevada, Colorado and New Mexico to Clinton, along with the usual Red and Blue states to each. We only missed three – New Hampshire, Michigan and Wisconsin (although for Wisconsin we didn’t have enough survey respondents to make our own prediction, so we had to use the average of polls instead).
So the only genuine misses of our method were Michigan, where it gave Clinton a 0.5-point lead, and New Hampshire, where it gave Trump a 1-point lead. We also predicted that Hillary might win more votes but still lose the election. Overall, our method was on average within a single percentage point in the key swing states. For example, in Florida we estimated 49.9% for Trump vs 47.3% for Clinton; in the end it was 49.1 to 47.7. In Pennsylvania we had 48.2% for Trump vs 46.7% for Clinton (it was 48.8 to 47.6 in the end). In North Carolina our method said 51% for Trump vs 43.5% for Clinton (Clinton got a bit more, 46.7, but Trump was spot on at 50.5%).
AL: That’s impressive. Were you nervous publishing a prediction that was so different from the established pollsters?
VV: Oh absolutely. Particularly given that, until 2016, most US pollsters and poll-based forecasters had done such a good job of predicting elections.
AL: Did you ever think that maybe other pollsters are right?
VV: At one point we started doubting our own results. We started thinking that we must have made an error in the model. Particularly since, by the Sunday before the vote, it was becoming clear to us that Hillary would lose Pennsylvania – a state that was supposed to be her stronghold in this election. We double-checked several times, and the numbers were right: Trump had a narrow lead in all the key swing states.
AL: Why were the polls so wrong over Brexit and Trump?
VV: First of all, opinion polls are incorrectly perceived as predictions of elections. They are just snapshots of voters’ attitudes at a given point in time. Taken together, however, they should point to a trend from which one can make inferences about the likely outcome. But pollsters are struggling to catch such trends, and to assemble representative samples in general. Take telephone surveys, for example: very few people use a landline nowadays, and people do not respond to mobile-phone surveys as eagerly as they once did to landline surveys.
AL: What about online polls?
VV: Online polls have their own problems, including proper sampling and self-selection of respondents. This makes them biased towards particular voter groups – young, better-educated, urban populations. Pollsters try to compensate for these biases by adjusting their results for various socio-demographic characteristics. However, the final result is still dubious, as shown in Florida during the US election campaign, where four different pollsters produced four different results from the same data set. Also, a recent study showed that the actual margin of error is about 7%, instead of the typically reported 3%.
AL: Ok, back to your method. How does it work? Do you use a survey? What data do you collect?
VV: Yes, we use an online survey – meaning we create our own data by asking participants three main questions. In addition to the usual “Who will you vote for?”, we add two more: “Who do you think will win, and by how much, in your state/region?” and “How do you think other people in your state/region will answer the previous question?” We then assemble the responses to get our “first draft” of the results.
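To make the idea concrete, here is a minimal sketch of how answers to those questions might be aggregated into a “first draft”. The field names and the 50/50 blend of vote intention with the crowd’s expectation are illustrative assumptions, not Oraclum’s actual weighting:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Response:
    own_vote: str            # "Who will you vote for?"
    predicted_winner: str    # "Who do you think will win in your state/region?"
    predicted_margin: float  # expected winning margin, in percentage points

def first_draft(responses):
    """Blend raw vote intention with the crowd's expectation of the winner.

    Returns a dict of candidate -> blended score. The equal weighting of
    the two signals is an illustrative choice, not Oraclum's model.
    """
    n = len(responses)
    intent = Counter(r.own_vote for r in responses)          # what people will do
    expect = Counter(r.predicted_winner for r in responses)  # what people foresee
    candidates = set(intent) | set(expect)
    return {c: 0.5 * intent[c] / n + 0.5 * expect[c] / n for c in candidates}

# Hypothetical Brexit-style survey: intentions are split 50/50,
# but the crowd's expectation leans towards Leave.
responses = [
    Response("Leave", "Leave", 2.0),
    Response("Remain", "Leave", 1.0),
    Response("Remain", "Remain", 3.0),
    Response("Leave", "Leave", 2.5),
]
print(first_draft(responses))  # Leave: 0.625, Remain: 0.375
```

Note how the second question shifts the draft away from the tied headline numbers – respondents who intend to vote one way but expect the other side to win carry information that a plain intention poll throws away.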
AL: So this is essentially a “wisdom of crowds” concept? Hasn’t this been tried already?
VV: Exactly, this part is pure wisdom of crowds. Some also call it citizen forecasters. And yes, it has been tried before. There are even academic papers exploring the idea in more depth. However, even having this piece of information is still not enough to make a good prediction.
AL: Why is that?
VV: Because people can fall victim to group bias if their only sources of information are polls and like-minded friends. For example, you can live in a liberal bubble where everyone around you thinks that Hillary will win for sure. Likewise, you may think there is no chance of Brexit happening, because only bigots would vote the opposite way. This is why we need social networks – to overcome this effect.
Using Facebook and Twitter, we can recognise which group is internally-biased and lives within its own little bubble. People living in such groups only see one version of the truth – their own. This means they’re likely to be bad forecasters. On the other hand, people living in more diverse groups are exposed to both sides of the argument. This means they are likely to be much better forecasters, so we value their opinions more.
AL: So in summary, you analyse people’s social networks to pick out the potentially better forecasters – and then ask them who will win?
VV: Yes, pretty much. We build a network of friends and interactions. We do this anonymously – we take no personal information from our participants or their friends, just their answers to the survey. This lets us see the group’s voting pattern, and thus measure its bias. We then use that bias to adjust the group’s prediction. This is the crucial piece of the puzzle – it’s why the method works.
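Oraclum’s actual bias metric is unpublished, but one simple way to implement the idea of “down-weight the bubbles, trust the diverse networks” is to weight each respondent’s forecast by the entropy of their friends’ declared votes. The following is a sketch under that assumption only:

```python
import math
from collections import Counter

def diversity_weight(friend_votes):
    """Normalised Shannon entropy of a respondent's friends' declared votes.

    A bubble (everyone agrees) scores 0; an evenly split network scores 1.
    Using entropy as the bias measure is an assumption for illustration --
    Oraclum's real metric is not public.
    """
    counts = Counter(friend_votes)
    total = sum(counts.values())
    if total == 0 or len(counts) < 2:
        return 0.0
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return h / math.log2(len(counts))

def weighted_forecast(respondents):
    """respondents: list of (predicted_winner, friend_votes) pairs.

    Returns candidate -> diversity-weighted share of the crowd's forecast,
    so forecasters in one-sided networks count for less.
    """
    scores = Counter()
    for predicted_winner, friend_votes in respondents:
        scores[predicted_winner] += diversity_weight(friend_votes)
    total = sum(scores.values()) or 1.0
    return {c: s / total for c, s in scores.items()}

respondents = [
    ("Clinton", ["Clinton"] * 10),                  # bubble: weight 0
    ("Trump",   ["Trump"] * 6 + ["Clinton"] * 4),   # mixed network: high weight
    ("Trump",   ["Trump"] * 5 + ["Clinton"] * 5),   # evenly split: weight 1
]
print(weighted_forecast(respondents))
```

In this toy example the Clinton forecaster sits in a perfectly homogeneous network, so their prediction is discounted entirely, while the two forecasters with mixed networks dominate the adjusted result.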
AL: What prevents others from doing something like this?
VV: The citizen forecaster model is long established. However, the method we have developed to analyse networks for groupthink is unique to us. We wanted to publish a scientific paper about it, but decided to try and make some money out of it first!
AL: How do you intend to monetise this?
VV: We’re in the business of selling information before anyone else. Imagine I come to you three days before Brexit and tell you that the markets are wrong, the polls are wrong, and that Brexit is about to happen. Investors would pay a lot of money for that piece of information – provided they trusted it. So what we sell is first-mover advantage.
This is where we are at the moment: we have first-mover advantage in the market for predictions. We are a potential disruptor, as we are cheaper than regular pollsters and more precise than the markets, the bookies and the poll-based forecasters. We were the first to realise the potential of such a method.
Our business model is based on serving a diverse client base, connected by a need for early insight into likely election results. On one hand this includes the finance and hedge-fund industries, which may use our exclusive information a few days before an election to position themselves ahead of the anticipated result. The betting industry can also use our predictions to adjust their odds – and thus save a lot of money. The bookies benefit particularly when the result is likely to run contrary to conventional wisdom.
We can also emulate the standard pollster model of selling our polling results to the media. However, here we face considerable competition – and we can only hope to penetrate this market once we have proved ourselves over time. Finally, we can sell our services to any company that wants to market a product and find out how people would react to it. It’s not just about elections!
AL: What are your plans for the future?
VV: This year we have two major elections coming up: the second round of the French presidential elections in May; and the German parliamentary elections, in October. In addition to this we are expanding our business model to market research, and have already started doing predictions of consumer behaviour for our private sector clients – as well as optimal pricing, etc.
AL: You touched on alternative applications a moment ago. Could you please expand on this?
VV: Elections are just the beginning. We believe that the potential for something like this is limitless. Essentially we can expand to any area that needs to anticipate human behaviour and decision-making: from betting markets to hedge funds, and from psychology research to companies wanting to predict whether a product will succeed. Elections were just a test of the method’s precision. Once we had established that it works, we started capitalising on our advantage.
AL: Finally, tell me more about your team. Who’s behind Oraclum?
VV: There are three of us leading the team. A computer scientist, Dr Mile Sikic, who is a professor of bioinformatics and data mining at the University of Zagreb and in Singapore; an astrophysicist, Dr Dejan Vinkovic, with a post-doc from the Institute for Advanced Studies in Princeton; and myself, a political economist doing a DPhil in Oxford. We also collaborate with a number of researchers, who are helping us with coding and network analysis. We also work with a company, UX Passion, doing the design and user interface for our surveys.
AL: How did you meet each other?
VV: We met at a conference organised by Dejan back in 2014, and discovered that we had all experimented with election forecasting models on our own. We then joined forces on an election coverage and forecasting project for the Croatian elections in 2015, hired by a national newspaper. We applied the standard poll-aggregation approach for the project, but quietly tested our own prediction method on the side – and saw that it had great potential. That’s when we decided to start a company.
I’d be delighted to hear your thoughts on this. Please do write in with your views. But more importantly, please tell us what you believe other readers will think of this article: firstname.lastname@example.org.