Incentive compatibility in data security
Felix Ritchie, ONS (Richard Welpton, Secure Data Service)
Overview
–Research data centres
–Traditional perspectives
–A principal-agent problem?
–Behaviour re-modelling
–Evidence and impact
Research data centres
–Controlled facilities for access to sensitive data
–Enjoying a resurgence as ‘virtual’ RDCs
 –Exploit the benefits of an RDC
 –Avoid physical access problems
–‘People risk’ is key to security
The traditional approach
Parameters of access
NSI:
–Wants research
–Hates risk
–Sees security as essential
Researcher:
–Wants research
–Sees security as a necessary evil
A classic principal-agent problem?
NSI perspective
–Be careful
–Be grateful
Researcher perspective
–Give me data
–Give me a break!
Objectives
V_NSI = U(risk−, Research+) − C(control+)
V_i (researcher i) = U(research_i+, control−)
risk = R(control−, trust−) < R_min
Research = f(V_i+)
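The signed arguments in the slide's shorthand can be written out as objective functions with explicit derivative signs. This is a restatement of the notation above, not an addition to the model:

```latex
\begin{aligned}
V_{\mathrm{NSI}} &= U(\mathrm{risk}, \mathrm{Research}) - C(\mathrm{control}),
  & U_{\mathrm{risk}} < 0,\; U_{\mathrm{Research}} > 0,\; C' > 0 \\
V_i &= U(\mathrm{research}_i, \mathrm{control}),
  & \tfrac{\partial V_i}{\partial \mathrm{research}_i} > 0,\;
    \tfrac{\partial V_i}{\partial \mathrm{control}} < 0 \\
\mathrm{risk} &= R(\mathrm{control}, \mathrm{trust}) < R_{\min},
  & R_{\mathrm{control}} < 0,\; R_{\mathrm{trust}} < 0 \\
\mathrm{Research} &= f(V_i), & f' > 0
\end{aligned}
```

So the NSI values research and dislikes risk, controls are costly to both sides, and both controls and trust reduce risk.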
A principal-agent problem?
NSI:
–Trust = T(law_fixed) = T(training(law_fixed), law_fixed)
–Maximise research s.t. maximum risk
–Risk = Risk_min
Researcher:
–Control = Control_fixed
–Maximise research
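In this notation, each side solves its own problem while treating the other side's variables as fixed. A sketch of the two problems as stated on the slide:

```latex
\begin{aligned}
\text{NSI:}\quad & \max_{\mathrm{control}} \ \mathrm{Research}
  \quad \text{s.t.} \quad R(\mathrm{control}, \mathrm{trust}) \le \mathrm{Risk}_{\min},
  \quad \mathrm{trust} = T(\mathrm{law}_{\mathrm{fixed}}) \\
\text{Researcher } i\text{:}\quad & \max \ \mathrm{research}_i
  \quad \text{taking} \quad \mathrm{control} = \mathrm{control}_{\mathrm{fixed}}
\end{aligned}
```

Neither side chooses trust: the NSI treats it as determined by a fixed legal framework, and the researcher treats controls as given. That is the source of the inefficiency discussed next.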
Dependencies
[Dependency diagram: the choice variables trust and control feed into research_i and Risk; research_i determines V_i; Research and Risk determine V_NSI]
Consequences: inefficiency?
NSI:
–Little incentive to develop trust
–Limited gains from training
–Access controls focus on deliberate misuse
Researcher:
–Access controls are a cost of research
–No incentive to build trust
More objectives, more choices
[Dependency diagram as before, with training and effort added as choice variables alongside trust and control]
Intermission: What do we know?
Conversation pieces
–Researchers are malicious ☒
–Researchers are untrustworthy ☑
–Researchers are not security-conscious ☒
–NSIs don’t care about research ☒
–NSIs don’t understand research ☑
–NSIs are excessively risk-averse ☑
Some evidence
Deliberate misuse:
–Low credibility of legal penalties
–Probability of detection more important
–Driven by ease of use
–Researchers don’t see ‘harm’
Accidental misuse:
–Security seen as the NSI’s responsibility
Contact affects value
Developing true incentive compatibility
Incentive compatibility for RDCs
Align the aims of NSI & researcher:
–Agree level of risk
–Agree level of controls
–Agree value of research
Design incentive mechanism for default:
–Minimal reward system
–Significant punishments
Bad economics?
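A toy numerical sketch of the alignment argument (all numbers are illustrative assumptions, not from the talk): when the researcher has no stake in the facility, zero compliance effort maximises their utility; once they share in the value of the facility's continued operation, high compliance becomes their best response.

```python
# Toy model: a researcher chooses compliance effort c in {0, 1, 2}.
# Utility = value of research - cost of compliance effort
#           + (aligned regime only) a stake in the facility surviving a breach-free period.
# All parameter values below are invented for illustration.

def researcher_utility(c: int, aligned: bool) -> float:
    research_value = 10.0
    compliance_cost = 1.5 * c            # effort is costly to the researcher
    breach_prob = [0.30, 0.10, 0.02][c]  # more compliance -> fewer incidents
    facility_value = 20.0                # value of continued access if no breach
    stake = 1.0 if aligned else 0.0      # aligned regime gives the researcher a stake
    return research_value - compliance_cost + stake * (1 - breach_prob) * facility_value

def best_compliance(aligned: bool) -> int:
    """Researcher's utility-maximising effort level."""
    return max(range(3), key=lambda c: researcher_utility(c, aligned))

print(best_compliance(aligned=False))  # 'them and us' regime -> 0 (no effort)
print(best_compliance(aligned=True))   # incentive-compatible regime -> 2 (full effort)
```

The point of the sketch is the mechanism, not the numbers: aligning payoffs changes the researcher's best response without any extra monitoring or punishment.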
Changing the message (1): behaviour of researchers
Aim:
–researchers see risk to the facility as risk to themselves
Message:
–we’re all in this together
–no surprises, no incongruities
–we all make mistakes
Outcome:
–shopping (researchers report others’ breaches)
–fessing (researchers own up to their own mistakes)
Changing the message (2): behaviour of NSI
Aim:
–positive engagement with researchers
–realistic risk scenarios
Message:
–research is a repeated game
–researchers will engage if they know how
–contact with researchers is of value per se
–we all make mistakes
Outcome:
–improved risk tolerance
Changing the message (3): clearing research output
Aim:
–clearances reliably good & delivered speedily
Message:
–we’re human & with finite resources/patience
–you live with crude measures, but
–you tell us when it’s important
–we all make mistakes
Outcome:
–few repeat offenders
–high volume, quick response, wide range
–user input into rules
Changing the message (4): VML-SDS transition
Aim:
–get VML (Virtual Microdata Laboratory) users onto the SDS with minimal fuss
Message:
–we’re human & with finite resources/patience
–don’t ask us to transfer data
–unless it’s important
Outcome:
–most users just transfer syntax
–(mostly) good arguments for data transfer
Changing the message: summary
–we all know what we all want
–we all know each other’s concerns
–we’ve all agreed the way forward
–we are all open to suggestions
–we’re all human
IC in practice
Cost:
–VML at full operation: c. £150k p.a.
–Secure Data Service: c. £300k
–Denmark, Sweden, NL: €1m–€5m p.a.
Failures:
–Some refusals to accept objectives
–VML bookings
–Limited knowledge/exploitation of research
–Limited development of risk tolerance
Summary
–‘Them and us’ model of data security is inefficient
–Punitive model is of limited effectiveness
–Lack of information causes divergent preferences
–Possible to align preferences directly
–It works!
Felix Ritchie
Microdata Analysis & User Support, ONS
Objectives
V_NSI = U(risk−, Research+) − C(control+)
V_i (researcher i) = U(risk−, research_i+, control−)
risk = R(control, trust)
control = C(compliance, trust)
trust = T(training, compliance)
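The revised model makes trust endogenous: training and observed compliance build trust, trust and compliance relax controls, and both controls and trust reduce risk. Substituting the last two equations into the first makes the chain explicit (a restatement of the slide, not new assumptions):

```latex
\begin{aligned}
\mathrm{risk} &= R\big(\mathrm{control}, \mathrm{trust}\big) \\
\mathrm{control} &= C\big(\mathrm{compliance}, \mathrm{trust}\big) \\
\mathrm{trust} &= T\big(\mathrm{training}, \mathrm{compliance}\big) \\[2pt]
\Rightarrow\quad \mathrm{risk} &= R\Big(
  C\big(\mathrm{compliance}, T(\mathrm{training}, \mathrm{compliance})\big),\;
  T(\mathrm{training}, \mathrm{compliance})\Big)
\end{aligned}
```

Unlike the first model, training and compliance are now choice variables that affect risk, so both the NSI and the researcher have an instrument for lowering it, and risk also now enters the researcher's own objective.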