This interdisciplinary course is part of the thematic training of the Leuven
Arenberg Doctoral School Training Programme. The course is mainly aimed
at Ph.D. students from all disciplines (either from the KU Leuven or
from other universities), but also open to undergraduate students,
post-docs, people working in industry, or anyone else interested in the topic.
In a series of lectures, the course will provide an overview of various
aspects of privacy from the technical, legal, and social science
perspectives. This year’s edition of the course will have a special
focus on privacy by design, web search services, and behavioral
advertising.
In addition to the lectures, this year’s course will feature
interactive exercise sessions in which the participants will work in
groups. In these exercise sessions the participants will apply what
they learn in the lectures to a practical case study (web search
application). The participants will be asked to identify the
stakeholders and their requirements, define the functionality of the
system, select the technologies that would be implemented in the
design, and discuss the legal and societal aspects of the system.
Participation in the exercise sessions is required to obtain the
certificate of attendance for the course.
Many thanks to the Arenberg Doctoral School for their support!
- Wednesday, June 27, from 9:15 to 17:00 and from 18:30 to 20:30
- Thursday, June 28, from 9:00 to 17:15
- Friday, June 29, from 9:15 to 16:30
- The course is free of charge, but attendees are required to
register by sending an email to firstname.lastname@example.org
- Coffee breaks will be provided for the participants.
Lunches are not provided; a number of restaurants are in the vicinity
of the course venue.
- The registration deadline is Tuesday, June 20, 2012.
- There is a limit of 40 participants for the exercise sessions! As of May 15, the limit has nearly been reached. No limits apply to attendance at the lectures or the panel.
Wed June 27
09:15 - 09:30 Welcome coffee
09:30 - 10:15 Lecture 1: Introduction (Claudia Diaz)
10:15 - 11:15 Lecture 2: Addressing Surveillance
and Privacy during Requirements Engineering:
The challenge of search and behavioral advertising (Seda Gürses)
11:15 - 11:40 Coffee break
11:40 - 12:30 Explanation of the practical
exercise (Seda Gürses)
12:30 - 14:00 Lunch break
14:00 - 15:15 Exercise session 1
15:15 - 15:40 Coffee break
15:40 - 17:00 Exercise session 2
18:30 - 20:30
Panel on privacy in search engines (open to the public) with Joss Wright, Andrew McStay, and Joris Van Hoboken
The panel will be followed by a reception sponsored by LSEC.
Thu June 28
09:00 - 09:15 Welcome coffee
09:15 - 10:15 Lecture 3: Web mining and privacy:
threats, opportunities, and search issues (Bettina Berendt)
10:15 - 11:15 Lecture 4: Social perspective on
(dis)empowerment of users in an internet environment (Jo Pierson)
11:15 - 11:35 Coffee break
11:35 - 12:35 Lecture 5: Technologies for private
search (Claudia Diaz)
12:35 - 14:00 Lunch break
14:00 - 15:00 Lecture 6:
Search engines and data protection - Google under the microscope of European regulatory authorities (Eleni Kosta)
15:15 - 15:35 Coffee break
15:35 - 17:15 Exercise session 3
Fri June 29
09:15 - 09:30 Welcome coffee
09:30 - 11:00 Exercise session 4
11:00 - 11:20 Coffee break
11:20 - 12:30 Exercise session 5: preparation of the presentations
12:30 - 14:00 Lunch break
14:00 - 16:30 Presentations of results of the
exercise and discussion
Panel on privacy in search engines (open to the public)
Andrew McStay
Abstract (presentation slides)
My panel presentation raises questions about the nature of consent in the
context of behavioural advertising and third-party cookies, particularly in
relation to the full roll-out of the European Cookie Directive
(2009/136/EC). In exploring tensions between the Article 29 Working Party
and the UK's understanding of consent, the presentation highlights paradoxes
in the UK's implementation along with lessons that might be learnt from
moral philosophy on the meaning of consent in relation to privacy and
positive values of autonomy.
Andrew McStay is Lecturer at Bangor University, UK, and author of Digital
Advertising (Palgrave-MacMillan, 2009) and The Mood of Information: A
Critique of Online Behavioural Advertising (Continuum, 2011). Forthcoming
books are The Mode of Seduction: A Discourse on Creativity and Advertising
(Routledge, 2013) and Deconstructing Privacy (Peter Lang, 2014).
Joris van Hoboken
Abstract (presentation slides)
Web search engines have played a central role in many of the privacy
debates related to the Internet. Amongst the primary reasons for this
are the collection of enormous amounts of user data at the
individualized level, the indexation and digitization of more and more
information (including personal data) on the Web and elsewhere, the
ever-growing sophistication of techniques to query data sets as well as
the integration of search services into behavioral advertising networks.
In his talk, Joris van Hoboken will map the privacy issues that are at
stake in the context of search and address some of the most important
unresolved challenges in this field.
Dr. Joris van Hoboken is senior researcher at the Institute for
Information Law. His research addresses law and policy in the field of
digital media, electronic communications and the internet. His interests
include the implications of the fundamental right to freedom of
expression and privacy online as well as the transatlantic comparison of
different regulatory approaches to the online environment. He is a
specialist in the field of search engine law and regulation and
regularly writes, teaches and presents on the issues of data protection
and intermediary liability on the Internet.
Joris was awarded a Ph.D. by the University of Amsterdam (2012) for his
thesis examining the implications of the right to freedom of expression
for the legal governance of search engines. He graduated cum laude in
both Theoretical Mathematics (2002, M.Sc.) and Law (2006, LL.M.) and
serves as the chair of the Board of Directors of Bits of Freedom, a
Dutch digital civil rights organization.
Joss Wright
Abstract (presentation slides)
Search has assumed a critical importance in modern networked societies.
The volume of information continuously generated and made available
through the internet has placed search in the role of gatekeeper to an
otherwise unmanageable flood of data. This privileged position gives
the companies that manage search deep insight into, and consequently
creates extreme privacy risks for, the users of their services. This talk will
examine privacy in the context of search, the social implications of
user profiling in these services, and explore areas where tensions arise
between providers and citizens, and whether these tensions can be
resolved or entirely avoided.
Dr. Joss Wright is a Research Fellow at the Oxford Internet Institute,
University of Oxford, where his research focuses on the themes of
privacy enhancing technologies and online censorship, both in the design
and analysis of techniques and in their broader societal implications.
He obtained his PhD in Computer Science from the University of York in
2008, where his work focused on the design and analysis of anonymous
communication systems. Dr. Wright has provided advice to the European
Commission, as well as a number of EU research projects, on the social,
legal and ethical impacts of security technologies. He has been featured
in national and international media commenting on digital rights issues,
and has written articles on topics related to online privacy, social
media and internet censorship for, among others, the Guardian and
Observer newspapers in the UK.
Lecture 1: Introduction (by Claudia
Diaz) (presentation slides)
This lecture will motivate the need for privacy
protection, introduce the arguments in the privacy debate, and review
the main approaches to privacy. Some of the questions that we will
address in this talk include: Why is privacy important? Why is it so
complex? What are the different meanings of "privacy"? How does
"privacy" translate to technical properties and how do these relate to
classical security properties?
Lecture 2: Addressing
Surveillance and Privacy during Requirements Engineering: The challenge
of search and behavioral advertising (by Seda Gürses) (presentation slides)
Privacy is a debated notion with various
definitions that are often vague. While this increases the
resilience of the privacy concept in social and legal contexts, it poses
a considerable challenge to defining the privacy problem and the
appropriate solutions to address it in a
system-to-be. Surveillance can be summed up as “any collection
and processing of personal data, whether identifiable or not, for the
purposes of influencing or managing those whose data have been
garnered” (Lyon, 2001). One of the main concerns with any type of
surveillance is social sorting, a form of classifying people based on
surveillance data that may lead to real effects on the life-chances of
people. In the context of web-based search, given its current
integration with targeted and behavioral advertisement, different
parties raise concerns with respect to privacy and surveillance. From
an engineering perspective, this raises the question of whether and how
these matters can be addressed when engineering information systems.
Ideally, when engineering systems, the stakeholders of the system step
through a process of reconciling the relevant privacy and surveillance
definitions and the (technical) privacy solutions in the given social
context. We will explore methods to define and elicit concerns based on
different privacy and surveillance notions; summarize the desired steps
of a multilateral requirements analysis approach; and discuss how these
methods can be applied in the context of web-based search and
behavioral advertising.
Lyon, D. (2001). Surveillance society: Monitoring everyday life.
Buckingham, UK: Open University Press.
Lecture 3: Web mining and
privacy: threats, opportunities, and search issues (by Bettina Berendt) (presentation slides)
Web mining is the application of data mining
techniques on Web data such as queries and other records of usage,
social-network profiles and friend links, or news, blogs and tweets.
Data mining means finding new knowledge that was previously only
implicit in data. Web mining thus operates on a wealth of personal data
that keep growing in volume and interrelatedness, and it leads to
inferences about individuals and groups that may be beneficial for some
but unwanted or even pernicious for others.
In this lecture, I will first give an overview of mining techniques and
typical uses such as profiling. I will then describe methods that have
been proposed for protecting personal data from unwanted inferences
(privacy-preserving data mining) or for reducing the risks of releasing
these data (privacy-preserving data publishing). I will investigate the
roles in the mining process (who is doing the mining on whose data of
what sorts) and identify threats and opportunities in different
settings, with a special focus on queries and other data related to search.
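As an illustration of how even a trivial mining step can turn raw search queries into an interest profile, consider the following sketch. It is a made-up example, not taken from the lecture: the categories, keywords, and function name are illustrative assumptions.

```python
from collections import Counter

# Hypothetical mapping from interest categories to trigger keywords.
CATEGORY_KEYWORDS = {
    "health": {"symptoms", "diagnosis", "medication", "flu"},
    "finance": {"loan", "mortgage", "insurance"},
    "travel": {"flight", "hotel", "visa"},
}

def profile(queries):
    """Count, per category, how many queries contain a category keyword."""
    counts = Counter()
    for query in queries:
        words = set(query.lower().split())
        for category, keywords in CATEGORY_KEYWORDS.items():
            if words & keywords:
                counts[category] += 1
    return counts

log = ["cheap flight to rome", "mortgage rates 2012",
       "flu symptoms", "hotel near station"]
print(profile(log))  # per-category query counts, e.g. travel appears twice
```

Real web mining uses far richer features and statistical models, but the privacy point is the same: the inferences (here, an interest in health or finance) go well beyond anything the user explicitly disclosed.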
Lecture 4: Social
perspective on (dis)empowerment of users in an internet environment (by
Jo Pierson) (presentation slides)
In a society where people increasingly rely on search
engines and social media for communication and information sharing, it
is vital to investigate these new forms of mediated communication from
the social perspective of users/citizens/consumers. However, in this
transitional digital media ecosystem we observe how people can become
simultaneously empowered and disempowered, particularly on
the levels of identity, privacy and surveillance. How this works out
depends on the interrelationship between how internet systems are
designed (i.e. what they enable) and what people within their social
context do with these systems (i.e. are able to do). We notice,
for example, that users of search engines and social media are
foremost framed as consumers, and that 'relevance' is foremost posited
as 'commercial relevance'. Questions are therefore: How can governance
and power manifest itself through the algorithm? To what extent and how
are the social practices by citizens and communities following,
opposing and/or negotiating the 'governance' of internet systems? In
what ways is the social self increasingly being commodified, with
personal data becoming the new currency? In what way can a
socio-technological perspective offer solutions?
Lecture 5: Technologies
for private search (by Claudia Diaz) (presentation slides)
Search queries closely reflect the issues in which
we are interested. This raises privacy concerns, as potentially
sensitive information can be inferred from these queries, such as
income level, health issues, or political beliefs. In this talk we will
review different technologies for implementing private search services.
This includes cryptographic techniques such as private information
retrieval, as well as obfuscation-based private web search based on
automatically generating fake queries.
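To give a feel for the obfuscation-based approach, here is a minimal sketch in which each real query is submitted alongside randomly chosen fake queries, so that an observer of the query stream cannot easily tell which queries reflect the user's true interests. The decoy list, the ratio of fake to real queries, and the function names are illustrative assumptions, not part of any specific system covered in the lecture.

```python
import random

# Hypothetical pool of innocuous decoy queries.
DECOY_QUERIES = [
    "weather forecast brussels",
    "pasta recipes",
    "train timetable leuven",
    "history of printing",
    "bicycle repair tips",
]

def obfuscated_batch(real_query, decoys_per_query=3):
    """Mix the real query with randomly chosen decoys, in random order."""
    batch = [real_query] + random.sample(DECOY_QUERIES, decoys_per_query)
    random.shuffle(batch)
    return batch

batch = obfuscated_batch("symptoms of diabetes")
print(batch)  # the real query hidden among shuffled decoys
```

Note that this sketch glosses over the hard part: in practice, decoys must be statistically indistinguishable from real user behavior, or a search engine can filter them out, which is exactly the kind of issue the lecture examines alongside cryptographic alternatives such as private information retrieval.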
Search engines and data protection - Google under the microscope of European
regulatory authorities (by Eleni Kosta) (presentation slides)
Building legally compliant systems that process
personal information is turning into a nightmare for online business.
The quest for finding the balance between the privacy of the users on
the one hand, and the maximization of the profit of online business,
usually deriving from the processing of user information, on the other,
proves to be a difficult task. This lecture will present the
initiatives of the European Commission in the frame of the reform of
the European Data Protection Directive to achieve such a balance. The
case of search engines, which collect and process vast amounts of user
information, will be used as an example.
The objective of the exercise sessions during the Interdisciplinary Privacy Course is for participants to have the opportunity to systematically think through a system where privacy is addressed from the beginning of systems development. The exercises should also provide the participants with the incentive to consider how they can employ the tools, strategies and accounts they learned in the lectures to information systems.
The exercises will be guided by a multilateral privacy requirements engineering methodology. Specifically, the participants will be asked to imagine a system from different stakeholder viewpoints, to mediate conflicting stakeholder interests, to document this process with the objective of eliciting basic (functional and privacy) requirements, and to consider ways of creatively applying legal and socio-technical solutions to fulfill those requirements.
All participants are asked to develop a new search engine of their choosing. Their objective is to step through the basic development phases of such a system with the privacy and security of the system in mind. The assignment is complicated by the needs of different stakeholders, legal constraints, and the availability of privacy solutions given the targeted functionality of the search engine.
At the end of the exercise sessions, each group will be expected to provide a presentation and a written report on the analysis of the system that they discussed. The report should include:
- a stakeholder analysis
- a basic functional and information model analysis
- a short analysis of a relevant legal framework
- a privacy analysis based on stakeholder interests
- a threat analysis, adversary analysis, or risk analysis of some sort
- an application of legal/socio-technical solutions to a sub-problem identified in the privacy/legal analysis (which may require revising functionality)
In addition, the teams will be given 5-10 minutes at the end of every session to briefly reflect on the process: how did the discussion go? Did they start with a problem definition? Were there unresolved conflicts? Were there further obstacles, e.g., terminology, complexity, etc.?
Sessions and formats
There will be a total of 5 sessions:
- Team Building and Stakeholder Analysis
- Functional Requirements Elicitation
- Privacy and Security Analysis
- Privacy and Security Requirements Elicitation
- Documentation and Presentation
Exercise session 1: Team building and stakeholder analysis
In this session the participants will first be asked to introduce themselves. Next, the participants identify the stakeholders and describe their interests and stakes in the system. This will include: their incentives, their interests, and the identification of potential conflicts between their interests.
Exercise session 2: Functional requirements elicitation
In this session the students will specify the
functionality, domain, and trust assumptions of the system. They also
construct an initial model of the information that is necessary to
fulfill the functionality of the system.
Exercise session 3: Privacy and security analysis
In this session the participants will identify the
legal frameworks that apply, describe the legal roles and
responsibilities of the stakeholders and their data protection
requirements, and discuss the societal implications of the system
linked to power relations between different stakeholders. They will
also conduct an analysis of the privacy concerns of the stakeholders
and the service integrity guarantees (i.e., a threat and security
analysis).
Exercise session 4: Privacy and security requirements elicitation
In this session the participants will further
refine the definition of privacy goals and provide suggestions for
privacy technologies that could be used in the system. The participants
are asked to apply some of the things they learned in the lectures to
the system they are developing. The specific choices of technical
solutions to be used in the system will require re-thinking the
applicability of legal frameworks and the concrete functionality of the system.
Exercise session 5: Documentation and presentation
In this session the participants will consolidate
their conclusions and prepare the presentation for the rest of the
course participants, which will take place in the last session of the course.