Policy Analytics: Realtime Policy Micro-Experimentation / by Justin Longo

An address to the Saskatchewan Economics Association Annual Conference
Regina, Saskatchewan
Thursday, October 27, 2016

Abstract

The basis of the policy analysis movement, now in its 65th year, has been that public policy responses and interventions should be better positioned to address public problems if they are based on the best available evidence, rather than on anecdote, informal beliefs, or inaccurate or partial data. During its history, policy analysis has usually relied on data collected at discrete intervals: once a policy choice was determined, data collection would follow the implementation of those policies to evaluate whether they had the desired effect. Policy analytics, in contrast, represents the combination of new data sources – e.g., from search and social media, to mobile smartphones, Internet of Everything (IoE) devices, and electronic payment cards – with new data analytics techniques for informing and directing public policy. Rather than implementing large-scale policy changes based on a best-case understanding of the system, policy analytics offers the possibility of small-scale experimental policy interventions that can be piloted and their effects precisely observed in realtime. This “big data” + data analytics approach to policy formation can dramatically change its traditional practice involving the discrete stages of policy formulation, implementation, and evaluation, into a continuous experimental cycle of measurement, intervention, observation, calibration, and adjustment. This presentation will make the case for a policy analytics approach of realtime policy micro-experimentation by describing emergent examples of this approach from around the world.

What would you say if I told you that, over the course of this talk, I was going to use a mix of advanced technology, strong analytical skills, some pre-planning, and a bit of Wizard of Oz stagecraft to keep you engaged and riveted by what I have to say, and to avoid, as much as possible, your being bored and anxiously waiting for the afternoon coffee break? Through a combination of technology, analytics, and agility, I’m going to adjust the content of my talk in response to your actions, so that I minimize and reverse any drift into boredom, command your full and rapt attention, and tailor this talk to what you as an audience are interested in.

I’ll let you in on the secret of how I’m going to do this. It involves some fairly sophisticated wireless scanning technology to observe what you are using your mobile devices to do (whether it’s to check your email, look up the pitching matchups for game 3 tomorrow, find out whether Morpheus actually ever said “what if I told you …”, or do some quick banking). That data is drawn into an analytical framework that provides a realtime assessment of whether my words as delivered from this stage are resonating with you or are instead leading to boredom and distraction. The results are being interpreted by my colleague Yassine El Bahlouli here, who in turn communicates the key points from his assessment to my computer screen using some pre-arranged cues that guide my reformulation of my presentation as I’m going through it. As those changes are made, Yassine can assess whether they are having the desired effect, as measured through your use of your mobile devices, and communicate that assessment back to me so I can continue on or make further changes.

Now, this is an example of what we are calling “policy analytics”: the use of large-volume data capture from the range of powerful Internet-connected devices that exist all around us (in this case, your use of mobile devices connected to the hotel’s wifi and the outside cellular network), coupled with advanced data analytics for making sense of all that data. Together these let us better understand our environment in realtime as we are intervening in it, assess how that intervention is working, and adapt the intervention when it doesn’t meet our policy objectives (in this case, my objective of keeping you engaged in this presentation) - a continuous cycle of monitoring, analyzing, assessing, and modifying the approach.

How is this different from what we used to do to understand how a presentation, speech, or performance was going? In the past I would have had to rely on a combination of experience (how to structure a presentation based on what I’ve observed works for others, and what has worked for me previously), situational and self-awareness during the presentation to gauge whether it was working (and barring loud cheering and spontaneous bouts of applause, this could be difficult), and - what was most common, but often least helpful - the post-event evaluation form. And that difference between these approaches illustrates, I think, how different policy analytics is from policy analysis.

Consider how we have done policy analysis for the past 65 years, an approach that has been described as “the policy cycle”, with a sequence usually characterized as involving stages like problem definition, analysis, solution identification, decision making, implementation, and evaluation, cycling back to future analysis. While we acknowledge that there are shortcuts and feedback loops, we generally think of the policy cycle as having these various discrete stages. When undertaken in practice, policy analysis has usually relied on data collected at discrete intervals: collect data so as to better understand the problem; collect evidence to inform the analysis and solution choice; and once a policy choice was determined, collect data following the implementation of the policy to evaluate whether it had the desired effect.

Policy analytics, in contrast, represents the combination of new data sources – e.g., from search and social media, to mobile smartphones, Internet of Everything (IoE) devices, Internet-connected sensors, and electronic payment and transaction cards – with new data analytics techniques for informing and directing public policy. It brings together all the discrete stages of the policy cycle into one continuous environment of observation, intervention, and modification. The illustration I just gave you of policy analytics - with audience analysis and realtime changes in intervention - is as different from traditional approaches to audience assessment and presentation as new approaches to policy analytics are to the long tradition we all know.

There’s another big difference between policy analysis and policy analytics that I want to draw your attention to, and it’s the opportunity for small scale experimentation that this new workbench provides. Our traditional approach to policy formation has been oriented towards the big initiative, roll-the-dice type policies where we analyze, assess, consider, think … and then decide. Partly because we are impatient and want to get on with doing things, and you can only think so much about a problem before you risk paralysis by analysis, when we make a decision we tend to make big decisions as in “here is the policy we are going to implement”, and we hope to god our analysis was right, and we’ll come back in a few years to evaluate how we did (but by then it will be someone else’s problem).

But rather than focus on implementing large-scale policy changes based on our best understanding of the system, policy analytics offers the possibility of small scale experimental policy interventions that can be piloted and their effects precisely observed in realtime. This data analytics experimental approach to policy formation can dramatically change its traditional practice involving the discrete stages of policy formulation, implementation, and evaluation, into a continuous experimental cycle of measurement, intervention, observation, calibration, and adjustment. And doing it at small scales and learning as we scale up offers the possibility of making small mistakes in the service of getting better as more people are brought in. Now, you can do policy experiments under the policy cycle approach, it’s just that it takes a long time to get results back from the field or pilot studies. And under a policy analytics approach, the interventions can be smaller, whereas in the traditional setting, the interventions have to be big enough to justify the expense of doing the experiment. Which is why I qualify it as “realtime policy micro-experimentation.”

What I would like to do in this presentation is to make the case for a policy analytics approach by describing emergent examples of this approach from around the world, asking you to consider whether this approach would work in your world. My objective is to try to convince you that there is great potential benefit in investing in the data infrastructure, people skills, and organizational and societal changes necessary to make these examples more plausible in the future, so that we can improve the policy function, make fewer big mistakes while making more small mistakes, and improve our societies.

Some examples of policy analytics

Example 1 - Dynamic pricing of access to HOV lanes in Los Angeles

Our first example comes from the policy field of transportation management. In Los Angeles, a brief experiment was run several years ago that used a combination of in situ sensors, car-based transponders, and digital signage to dynamically price access to high-occupancy vehicle (or HOV) lanes.

You are no doubt aware that Los Angeles has some of the most intense freeway traffic in the U.S., with all of the LA region's major freeways having segments that move at less than 10 mph during the most heavily traveled parts of the morning and evening peak periods. Under these circumstances, access to the HOV lanes is coveted, but not so much that people will actually, you know, have two or more people in a car at once. Another way to access the HOV lanes in LA is to buy and register an ultra-low-emission vehicle (ULEV) and display the appropriate state-issued decal.

Even with those criteria - either one extra passenger, or driving a ULEV - HOV lanes see lower demand and move at much faster speeds than regular lanes at peak periods.

You’ll recognize the ULEV approach as rationing access through pricing. You can technically buy your way into the HOV lane. This policy analytics experiment sought to ration access to the HOV lanes by putting an explicit price on that access, and dynamically adjusting the price in order to meet a specific policy objective.

Say your policy objective for HOV lanes on LA freeways is a minimum speed of 45 miles per hour: if speeds drop below this, you want to raise the price such that people choose to leave the lane.

In a policy analysis approach, you would do what was done for, say, the 407 Express Toll Route in Toronto: an analysis that produces differentiated prices based on time of day. The cost to use that expressway rises during anticipated peak periods and falls otherwise.

The LA experiment, again, took a policy analytics approach. Using road sensors, cameras, and car-based transponders, the speed in the HOV lanes was measured, and the price for access (for those not qualifying under the regular conditions) was adjusted. These price changes were communicated to drivers using digital signage, and drivers could decide accordingly. So if you were faced with travelling at 10 mph in the regular lanes, and the digital sign was enticing you to enter the HOV lane, you might opt for the HOV lane in exchange for paying some amount of money. If too many people accepted what they saw as a bargain, and increased demand led to reduced speeds, the system would react by raising the price further, until demand for access to the lane dropped to the point where the policy objective - 45 mph - was met and maintained.
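The feedback loop at the heart of that experiment can be sketched in a few lines of code. To be clear, this is a minimal illustration of the idea, not the actual Caltrans control algorithm: the target speed, price step, and price bounds below are all assumed values for the sake of the example.

```python
# Minimal sketch of a dynamic-pricing feedback loop for HOV lane access.
# All parameters (target speed, price step, price bounds) are illustrative
# assumptions, not the actual Caltrans algorithm.

TARGET_MPH = 45.0      # policy objective: keep the HOV lane at or above 45 mph
PRICE_STEP = 0.25      # dollars to adjust per control cycle
MIN_PRICE, MAX_PRICE = 0.25, 15.0

def next_price(current_price: float, measured_mph: float) -> float:
    """Raise the toll when the lane slows below target; lower it when the
    lane is comfortably above target (spare capacity going unused)."""
    if measured_mph < TARGET_MPH:
        current_price += PRICE_STEP   # discourage entry
    elif measured_mph > TARGET_MPH + 5:
        current_price -= PRICE_STEP   # entice drivers out of the slower lanes
    return min(max(current_price, MIN_PRICE), MAX_PRICE)

# One control cycle: sensors report 38 mph, so the posted price rises.
price = next_price(4.00, 38.0)
print(price)  # 4.25
```

Run the loop every few minutes against fresh sensor readings and the posted price converges on exactly the point the economists care about: the price at which demand for the lane matches its capacity at 45 mph.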

So what is the California Department of Transportation doing here - and I mean this in a way that a room full of economists would love? They are actually drawing the demand curve for access to the HOV lane during periods of high congestion. They are measuring willingness to pay for access to a fixed supply, to identify the price at which the speed objective is achieved (where demand is neither greater than nor less than capacity).

Is anyone else doing this? Yes - Uber. Uber uses smartphones to communicate to drivers and passengers in order to change not just demand for services but also the supply of services. During periods of high demand (e.g., when it’s raining), they use what’s called surge pricing to provide an incentive to would-be drivers to get out on the road, and to encourage would-be passengers to consider delaying their trip for a bit.

There’s a great recent study that actually achieves that rare in-the-wild spotting of a demand curve - and it looks just like the theory said it would. It also reveals that the greatest invention in marketing - 99 cents - still works: people's behaviour is remarkably different when surge pricing goes from 1.9 to 2.0, and changes again above 2.0.

  • (interesting little tidbit though: do you know what predicts whether a person will accept a surge factor of 2, 3 or even 4? The battery life on their phone. If you’re at the bar with friends, and Uber is surging 3.5 - have another drink. Unless your phone is about to die. Because if the only way you can get in that car is to have a working smartphone, and it’s at 3% battery life, it is what it is and you take that car no matter the price.)

Example 2 - On-demand public transportation

OK, next story, also from the world of transportation, let’s look at public transportation.

How do we run local bus transit services? Analyze where people are and where they want to go, create a bunch of routes, buy a bunch of buses and hire a bunch of people, and run those buses on a schedule (running on schedule is purely theoretical, of course). We have a fixed price to access those buses, pricing them in a way that balances fairness and willingness to pay with what it costs to run the system. And then we occasionally revise the schedule and routes based on rider feedback, public consultation, and perhaps other data like rider counts. In other words, traditional policy analysis.

By the way, I’ve lived in a lot of places, and Regina has one of the wackiest transit systems I’ve ever seen (I mean, I’ve lived in Ireland - and the bus system there is something to behold).

This is an actual route from the Regina transit system (which I think is called Heritage because it offers a semi-guided tour of the city’s heritage properties). I like the little bit on the left, which reminds me of that old saying: “two wrongs don’t make a right - but three lefts do”. I’m sorry if anyone here is from Regina Transit, but I ride the bus here and I’ve given up being nice.

Let me briefly back up and ask: why do we have local public transit? Perhaps it’s to offer transportation services to those who can’t afford to drive or simply can’t drive, or to reduce costs for commuters. Maybe it’s to take pressure off public infrastructure like roads and parking. Maybe there’s an environmental benefit like reduced air pollution, or public health benefits like safety and active transportation.

What if I told you (sorry, no Morpheus meme) that a city like Regina could have all those benefits without the cost and hassle of running the City of Regina Department of Transportation?

If you don’t believe me, soon enough someone from Uber or Lyft’s public transportation division, or from Google’s Sidewalk Labs arm, is going to approach the City of Regina with an offer to take over the city’s public transit system: on-demand transit services that respond to riders’ calls for transportation, provide a service standard that exceeds what currently exists, and offer dynamic pricing below an agreed threshold.

So instead of thinking “I’m at point A, and need to figure out how to take the bus to get to point B”, a rider will use an app to say “I’m at point A, and need to get to point B”; the app will direct them to a pick-up spot nearby, promise to deliver them to their destination, and provide a fare quote. A bus or shuttle or even a car will pick them up, dynamically routing to collect other passengers along the way, and deliver them at the promised time for the quoted fare. No pre-planned routes, no pre-set fares - but convenient and potentially cheaper than what we currently enjoy (again, at least below an agreed threshold). And with the medium-term arrival of driverless vehicles, the price will make traditional buses uncompetitive.

Already, in Toronto, Uber is offering, during certain times of the day, transportation that rivals the city’s transit system - faster, cheaper, and nicer.

Remind me again why we have a city-run bus transit system?

Example 3 - High Performance Teams

Let’s move from the societal level to the organizational level. What makes for high performing teams?

Research at MIT’s Human Dynamics Laboratory has identified the elusive element in group dynamics that characterize high-performing teams—those teams with energy, creativity, and shared commitment that surpass other teams. These dynamics are observable, quantifiable, and measurable - and, teams can be taught how to strengthen them.

In these studies, the researchers equipped all team members with electronic devices that collected data on their individual communication behavior — tone of voice, body language, who they talked to and how much. The devices capture more than 100 data points a minute and are unobtrusive enough not to influence natural behaviour. The measurements found patterns of communication to be the most important predictor of a team’s success — as significant as factors like individual intelligence, personality, and skill combined.

Similar initiatives to measure student engagement and performance are being funded by the Gates Foundation.

A biometric bracelet named the “Q Sensor” - also referred to as an “engagement pedometer” - wrapped around the wrists of students would identify which classroom moments excite and interest them, and which fall flat. This is a more intrusive approach than what Yassine and I are doing here today by tapping into wireless networks, but similar in objective. The biometric bracelets send a small current across the skin and then measure subtle changes in electrical charges as the sympathetic nervous system responds to stimuli.

The hope is that such sensors can become a common classroom tool, enabling teachers to see, in real time, which kids are tuned in and which are zoned out. That feedback can lead to revised curriculum delivery, a focus on particular students, a change in tactics, or a signal that material needs to be reviewed.

No word on whether the teacher could send a small electrical impulse to help re-focus the zoned out student.

Example 4 - Smart Nudges, Tiny Acts

You’ll be familiar with behavioural economics and nudge theory - the idea that you can make subtle changes in design, architecture, incentives, information, norms, and constraints that have outsized impacts on behavioural choices.

We often think of those approaches in a traditional policy analysis way - identify the problem, propose a solution, try it and measure the effect. The classic example is opt-in vs. opt-out organ donation policies.

This graph shows the percentage of people, across different European countries, who are willing to donate their organs after they pass away.

Again, you probably know this story but it turns out that the design of the form when you renew your driver’s licence has a huge impact on choice. In countries where the form is set as “opt-in” (check this box if you want to participate in the organ donation program) people do not check the box and as a consequence they do not become a part of the program. In countries where the form is set as “opt-out” (check this box if you don’t want to participate in the organ donation program) people also do not check the box and are automatically enrolled in the program. In both cases large proportions of people simply adopt the default option - not because they’re lazy, but because it makes a hard choice easy.

The question is: can we use principles like nudge theory in a more dynamic way, one that takes advantage of the powerful devices ubiquitously moving around us to measure the environment, individual behaviour, and health conditions, and to intervene by sending information to the individual in order to change a behaviour?

Many researchers are now investigating this, including my colleague at the Johnson Shoyama School, Tarun Katapally, who is studying whether smartphone-based feedback to participants can reduce sedentary behaviour (like sitting in a hotel conference room listening to a talk that has gone on too long) and promote healthy behaviours.

The power in these devices is that we can get realtime feedback as a response to a policy intervention, and calibrate those interventions to zero-in on our policy objective.
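To make that calibration idea concrete, here is a toy sketch of what such a smartphone feedback cycle might look like. The function names, thresholds, and calibration rule are my own illustrative assumptions - not the design of Katapally’s study or any other - but they show the measure-nudge-calibrate loop in miniature.

```python
# Toy sketch of a realtime "smart nudge" feedback loop. The thresholds,
# step sizes, and calibration rule below are illustrative assumptions,
# not the design of any actual study.

def should_nudge(sedentary_minutes: int, threshold: int) -> bool:
    """Fire a nudge once continuous sedentary time reaches the threshold."""
    return sedentary_minutes >= threshold

def calibrate(threshold: int, user_responded: bool) -> int:
    """Adjust the threshold based on the observed response: if the user
    got up when nudged, nudge a little earlier next time; if the nudge
    was ignored, wait longer before the next one to avoid nudge fatigue."""
    if user_responded:
        return max(30, threshold - 5)    # never nudge more often than every 30 min
    return min(120, threshold + 10)      # never wait more than two hours

# One cycle: 60 minutes of sitting trips a 60-minute threshold; the user
# responds, so the next nudge comes slightly earlier.
threshold = 60
if should_nudge(60, threshold):
    threshold = calibrate(threshold, user_responded=True)
print(threshold)  # 55
```

The point is not the particular rule but the structure: the device measures behaviour, the intervention fires, the response is observed, and the intervention is recalibrated - continuously, per individual, in realtime.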

There are several other examples that we could consider: financial services regulation (which should in future give regulators an opportunity to see things like the mortgage-backed securities catastrophe earlier than they did), monetary policy (including new approaches to measuring the consumer price index and the velocity of money), national statistics collection (including whether we can have a big-data census without form-based reporting), education initiatives to pilot new approaches at small scale, health care management, natural hazards management, and simple mapping of phenomena to see hotspots before they become problematic (though I do want to remind everyone that playing Pokemon Go, regardless of how you may feel about it, is not a crime).

What I hope these examples have done is to persuade you that there is merit in at least considering, in some contexts, a revised approach to policy analysis that uses realtime feedback from the environment to determine whether our policy interventions are having the desired effect, and to begin thinking about those interventions in a smaller-scale, experimental way.

Oh, and by the way: we weren’t really tracking your use of your mobile devices. But I suspect you suspected that.

Thank you.