## Tensions related to identity and community regulation

White Paper by Matthew Schutte

#### Summary

Identity serves a critical role in the regulation of any community. However, it is not a simple role. This paper attempts to describe the role that identity plays in enabling multiple actors, each operating from their own perspective, to rely upon one another to improve their own situational awareness and take action. It then explores the expectations and pressures that identity enables actors to place upon one another, as well as the way that those expectations and pressures can change in response to shifts in the external environment. It details the relationship between privacy and transparency in the identity realm and the impact that this has on the regulation of the community as a complex adaptive system more generally. Finally, it concludes with a series of proposals for specific technical solutions to address some of the tension-balancing requirements that are necessary to enable communities to self-regulate amidst changing circumstances.

#### How does identity work?

Identity is not just about putting a label on an entity.

In part, this is because there are no clear boundaries in the physical world, let alone in the world of “platonic forms” -- abstract patterns and their descriptions. At any point in time, in order to understand context and take action, any actor must operate from partial and imperfect information.

There is too much detail available. As a consequence, paying attention to more detail than is helpful from a decision making and action taking perspective proves costly in a survival sense and gets disfavored by evolution -- those who suffer from “analysis paralysis” tend to go extinct. Nature chooses “useful” over “true” any day of the week.

Actors that are able to take action on the basis of fairly accurate (albeit imperfect) information derived from synthesis, or from a heuristic based decision process, become favored over actors that must handle all of the information “in the raw” -- not because they understand the environment more completely, but because they are able to “chunk” information into useful pieces and thus reduce the transaction costs of making a “wise enough” choice.

This “chunking” or synthesis of information is by its nature a “lossy” process. It results in a symbolic representation -- a name -- that treats a particular set of phenomena as if they were separate from the world around them, and ignores the ways in which their boundaries blend with their surrounding environment. In the naming of a thing, the flows of electromagnetic energy, chemical reactions and physical forces between that entity and those around it are -- for the moment -- abstracted away, and a phenomenon that is dynamic and changing loses its “life-like” quality and becomes like a butterfly pinned under glass -- a name assigned, and now easy to handle, but a partial and inaccurate description of what was really happening there.

Nonetheless, the assigning of a “name,” a handle that loses detail, enables the members of a community to communicate about that entity with such economy of effort, that the transaction cost savings more than make up for the loss of accuracy.

Furthermore, depending on the needs at any point in time, the details of the boundaries of that entity may be irrelevant or they may be critical. Where they are irrelevant, the “simple handle” enables actors to coordinate their actions fluidly. For instance, when I say “Please pass me the ketchup bottle,” you become quickly able to identify the bottle that says Heinz 57 and pass it to me. It does not matter that the bottle is half-full, or that it is slowly gaining heat energy from the surrounding environment as the afternoon wears on. It does not matter that it is but one of many ketchup bottles in the room. This was the one at our table. It was obviously the one that I was referring to. And now its contents are beginning to cover some of my french fries.

The name enabled me to quickly communicate information and -- more importantly -- delegate action taking to another member of the community -- you.

#### Beyond Ketchup -- The Identity of Actors

When it comes to actors -- humans, for example -- identity is a mechanism that fosters a balancing of two objectives that are always in tension within any community:

  1. The autonomy of an individual to determine which contexts she shows up in, which portion of her history (if any) she will provide for reference and how she will present herself more generally.

  2. The ability of other members of the community to track and share information about that particular individual so that each of them is better able to readily assess their own context and navigate their own circumstances.

Together these represent the tension that exists between the autonomy of individuals and the functioning of a reputation system within any social context. My ability to control what you can find out about me impacts your ability to make decisions with regard to how you will interact with me. Similarly, your preference for accessing information about me impacts my ability to exercise autonomy over my own life.

Both values are important. It is easy to see the value that comes from transparency. Access to information improves community members’ abilities to navigate their circumstances -- plus it results in stronger pressure to “behave well.” However, the autonomy value -- which could also be described as a privacy value -- is critical as well. It matters for human reasons: having no control over our own lives is not a desirable state of affairs, regardless of the efficiency gains. But it is also important from a community perspective. When individuals are able to exercise control over where they will show up, who they will share with and what they will discuss or work on, they are able to create safe spaces within which they can experiment, collaborate, communicate and invest effort. Consequently, communities that have well functioning private spheres end up having a much broader set of ideas and ways of living being practiced within them. The result is a diversity of experience that can be drawn upon as circumstances change. The community with well functioning private spheres proves considerably more adaptable than the community without them. In other words, privacy actually fosters communication and results in a more diverse, resilient and thriving society.

The balance between these two values, privacy and transparency, tends to shift fluidly within communities -- and at any point in time it “emerges” from the preferences and priorities of the members of the community, in the context that they face.

For instance, where community members perceive low risk, social norms will tend to demand low levels of transparency, and vice versa. Christopher Allen’s white paper on Progressive Trust lays out a detailed example that draws on many of the tools humans use to make themselves increasingly vulnerable to one another and thereby strengthen their counterparty’s willingness to trust and interact with them.

Consequently, there are a few other factors that play a role in determining the balance of privacy and transparency:

  1. The context that each actor perceives themselves to be in,
  2. The social norms that that actor perceives to be appropriate to that particular context, and
  3. The confidence that each actor has that their expectations will be upheld.

##### Context

The assessment of context involves a variety of forms of information that contribute to the assessing actor’s situational awareness. A more detailed articulation of how situational awareness is developed through direct experience, indirect experience and contemplation is explored in my paper “Identity, Guidance and Situational Awareness.” However, it is important to note that this assessment always takes place from the perspective of the individual themselves. It is their perception of the situation -- and not the situation itself -- that will guide their decision making about how to behave.

##### Social Norms

Social norms develop over time as a result of interactions amongst community members in contexts of similar form and pattern. Through trial and error, the members of a community learn what “works” and what doesn’t. And over time they come to expect ways of behaving that align with the methods that have proven useful in the past.

These expectations about behavior and appropriateness are usually informal and unwritten, but are nonetheless strongly held and almost always accompanied by some form of enforcement mechanism.

However, they are also context dependent. As contexts change, expectations about what forms of behavior will work well based on past experience may quickly become poorly suited to the new situation. Consequently, norms have to change as well.

In a world where “the only constant is change,” this means that the community that is best able to sense shifts in context and adjust expectations to suit the new reality will tend to thrive whereas those that are slow to adapt their expectations as circumstances transform will tend to stumble -- and potentially, die.

##### Confidence

The confidence that social norms will be upheld is a direct result of past experiences with similar circumstances. Based on those experiences, actors will make predictions (consciously or unconsciously) about whether expectations will be met. For instance, in contexts where norms are regularly violated, we may feel vulnerable and will demand additional forms of security before we become willing to interact with or rely upon the other party. That’s a fancy way of saying “once burned, twice shy.”

On the other hand, where enforcement mechanisms are effective, they tend to fade into the background, and users cease to even be conscious of their existence. To paraphrase the David Foster Wallace parable, participants become like the fish who do not realize they are in water.

Where enforcement is particularly effective, individuals “stay within the prescribed modes of behavior” not because they are worried about being punished -- they don’t even think about “drawing outside the lines.” Rather, belief comes to do the bulk of the heavy lifting in a regulatory sense. But these same beliefs (mental heuristics, yet again) are able to persist in part because they are not undermined by frequent breakdowns in trust. In short, enforcement mechanisms that work well result in costs imposed on those that “push the boundaries.” As a result, people rarely push the boundaries, and the bulk of the community comes to “assume” that this harmonious state is “natural” rather than being aware of its relation to a functional, albeit rarely visible, enforcement system.

However, though they seem invisible to most, it is still the enforcement mechanisms that enable trust to grow and that unlock collaboration and coordination.

At the same time, as noted earlier, circumstances change -- and old norms and expectations move out of alignment with new conditions. Enforcement mechanisms that are overly effective at preventing behaviors that stray from “historical best practices” can end up “shooting us in the foot” by keeping us repeating actions that no longer serve us well.

What a conundrum!

If enforcement mechanisms work poorly, trust breaks down and so does the ability to delegate, collaborate and extract benefit from membership within a community.

Yet if enforcement mechanisms work too well, they end up locking us into ways of behaving that can quickly become outdated and irrelevant and can again lead to a breakdown in collaboration as it ceases to yield benefits.

#### So the question becomes: How can we balance effective enforcement of existing social norms with the capacity to adapt those norms as circumstances change?

Let’s start by exploring how norms typically get enforced within communities.

##### Two types of enforcement mechanisms: violence and access control

Consider an example: Nancy tells Ed a secret and makes it clear that he should not tell Ruby, a rival. She gets Ed to sign a non-disclosure agreement that covers the secret involved, but that also goes beyond it, saying “if you cross me, I will use every tool at my disposal and will go to the ends of the earth to hunt you down and break your legs.” Ed says he gets the hint.

In general, communities have two types of enforcement mechanisms that can be used to influence the behavior of individual actors:

  1. Violence
  2. Access Control (I apologize for the name sounding confusingly like ACL, but bear with me on this -- I’m not endorsing that particular security architecture here)

##### VIOLENCE

I’ll describe violence as an attack on, or restraint of, the body of the subject of influence. Threats of violence actually do the bulk of the heavy lifting in violence based regulation -- rather than violence itself. Nonetheless, because those threats are backed by violence itself, I’ll make do with simply analyzing the way that actual violence operates. Violence based enforcement is the primary tool that underlies modern governmental systems of politicians, statutes, police, courts and prisons. The threat of imprisonment is the “force” part of the “enforcement” for everything from parking tickets, to commercial leases, to minimum wage laws.

In our example, should Ed tell Ruby the secret, Nancy could pursue the matter in court. Ed’s violation of the non-disclosure clause would likely result in a judgment for damages or actual imprisonment. (If he refuses to pay the damages, he definitely risks imprisonment.)

In violence based enforcement, the enforcing action “locates” on the body of the individual -- in our instance, Ed. To simplify things a bit: Ed goes to jail. Similarly, Nancy could “break his legs” as she had promised. This is more in line with the tactics used by mafias and other extra-governmental organizations.

##### Violence is efficient

As an enforcement tool, violence is incredibly efficient. For violence to “work” on Ed requires only one actor (whether the nation state or one of Nancy’s goons) to imprison him or attack him. It does not require coordination from everyone else in the community. In fact, no one else needs to even be aware of what has happened. Because it only takes one action to enforce a rule, communities often coordinate their use of violence in the form of a “justice system”: norm selection is formalized (laws) and enforcement is “delegated” (arrest and imprisonment). This is not without its own risks. But it is a consequence of the fact that violence can be inflicted by an individual and does not require the involvement of other community members.

##### Violence is binary

However, though violence is efficient, it is not much use in situations where community members disagree about which norms should apply. Because a violent action locates on Ed directly, it forces the community into a consensus with regard to the action taken. It is not possible to both put Ed in jail AND ALSO not put Ed in jail. In essence, violence does not allow for disagreement within the community to be expressed in the enforcement action itself.

That is not to say that people aren’t able to disagree with a decision. They are free to disagree. But their disagreement isn’t going to do Ed much good while he is rotting in jail or recovering from his broken legs. In short, violence forces a consensus concerning social norms -- at least at the point of enforcement. Something very particular will happen to Ed’s body -- whether that is imprisonment, broken legs, a slap on the wrist, or a determination that Ed’s offense is something worth celebrating rather than punishing.

This inability for disagreement to be expressed at the point of enforcement is problematic, particularly when a community reaches a size that includes multiple religious groups with vastly different worldviews -- and social norms. In such a situation, a regulatory system that does not permit disagreement results in a political process that is “winner take all.” Many of the debates about which areas of life should be under government control and which should not stem largely from this characteristic of violence.

We are aware of this at an intuitive level. The fact that violence is such a binary tool -- he is either in jail or not in jail, executed or not executed -- is what leads us to care so deeply about government having a monopoly of violence (we don’t want local warlords running rampant). It is also a primary driver behind the push for democracy, due process and other institutional constructs to ensure that this powerful tool is used “for the benefit of the community.” Of course, which things “benefit the community” are themselves likely to be a matter of disagreement.

However, communities also have a separate tool that has been used since the dawn of humanity -- access control.

##### ACCESS CONTROL

Nancy can also choose to turn away from Ed. She can refuse to ever do business with him again. In addition to that, she can tell everyone she knows about his betrayal -- and each of them may decide to turn away from Ed as well.

Access control is a result of the ability of individuals within a community to exercise autonomy over their own actions -- and consequently to turn toward those who behave in ways that they like, or away from those who act in ways that they disapprove of.

##### Access control allows for disagreement

Unlike violence, where the enforcement action locates on Ed’s body, with access control the enforcement action locates on the individuals that Ed might interact with and any resources under their control. It is these individuals and resources that Ed is being granted or denied access to.

Nancy refuses to hire Ed again. Ed loses access to that work opportunity and the pay and opportunities that could flow from it.

Nancy’s neighbor refuses to invite Ed to a party. He loses access to social opportunities.

Patricia learns of the betrayal and breaks up with Ed. He loses the companionship and intimacy of that relationship.

Later perhaps, Ruby hears that Ed is single and out of a job and approaches him with an offer to stay at her house for a while until he gets things figured out.

In each of these instances, the enforcement action is not locating “on Ed”; it is locating on the individual or resource that Ed may want to gain access to. Each of these individuals is making decisions about:

  1. What norms and behaviors they believe are appropriate/desirable
  2. How important that particular norm is relative to everything else that is going on.

Within an access control system, the various members of the community can disagree not only about “which norms are appropriate” but also about “how high a priority each of those norms is.” One person can turn away from Ed for life. Another might wait to interact with him again until he has performed some sort of redemptive act. And a third, Ruby, might reward him for what he did.

As a consequence, a diversity of actions can be taken by the members of the community -- rather than only the single action of a violence based system.

To be fair, access control based systems can be every bit as brutal and coercive as violence based systems. Someone who is cut off entirely from the community may starve to death or suffer some other terrible fate. However, access control systems allow communities to coexist amidst disagreement in ways that violence based systems simply do not enable.

Because of this fact, access control systems enable distributed decision making about social norms and their relative importance. As certain members of a community push particular boundaries and gain experience with new norms, they are able to discover different ways of behaving that might suit their present context better than established norms had. With that, the “new best practices” have a chance of spreading within the community. As a result, communities that rely upon access control structures to enforce social norms tend to be better at adapting to new circumstances.

##### Access control requires greater coordination

However, significant coordination amongst the members of the community is required for an access control based enforcement regime to work well enough to engender trust in any particular interaction. If Ed can betray Nancy, simply crash at Ruby’s house, and suffer no consequences, individuals will regularly find that they can violate norms -- and as a result confidence, and consequently collaboration, will break down.

Because of this fact, access control based enforcement has historically worked well within small groups, but begins to perform more poorly as group size scales. On the other hand, violence based enforcement has historically worked well independent of group size. This is the primary reason why humans have shifted more and more of our regulatory burden onto violence based systems as our populations have exploded and trade has expanded.

Nonetheless, those violence based systems simply cannot handle the diversity of worldviews present in our modern-day, global, interconnected world. The concept of a single global government to rule over everyone carries significant risk of becoming an oppressive, tyrannical regime. Furthermore, violence based systems, because of their “singular action” nature, prove far less adaptable than access control based regimes -- and this is a critical shortcoming.

##### Reputation

As anyone from a small town will tell you, “in a small group, everyone knows everybody else’s business.” There is a feeling of constraint that exists. That sense of constraint is the result of a highly functional reputation system that keeps people “behaving well.” Gossip plays a major role in the information flow of such a community. The consequence is that when a person engages in an interaction:

  1. they generally know quite a bit about the other person they are interacting with and
  2. they know that if they “misbehave” in some way, word of that misbehavior is likely to spread to the rest of the community.

However, as group size increases, the amount of information to keep track of increases as well, and thus each individual member of a community will -- by necessity -- maintain mastery over a smaller and smaller proportion of all there is to know. Access control style enforcement requires a significant amount of work to acquire and interpret information about the other members of the community. That work increases as group size increases. Robin Dunbar’s number of 150-person groups has proven a useful starting point in thinking about the sizes of groups that humans can sustain without the advantage of institutional tools such as currency, written agreements, sovereign enforcement, etc. Christopher Allen has written a series of wonderful posts exploring the effort required to sustain trust within a community and the impact that the intent or purpose of the group has on the size that groups will stabilize around. He explores the point further in a paper on group size in online gaming communities:

> This all leads me to hypothesize that the optimal size for active group members for creative and technical groups -- as opposed to exclusively survival-oriented groups, such as villages -- hovers somewhere between 25-80, but is best around 45-50. Anything more than this and the group has to spend too much time "grooming" to keep group cohesion, rather than focusing on why the people want to spend the effort on that group in the first place -- say to deliver a software product, learn a technology, promote a meme, or have fun playing a game. Anything less than this and you risk losing critical mass because you don't have requisite variety.

The short version is: the building and maintenance of trust takes work. And the amount of work to do increases as group size increases.

##### Ratings Systems

However, simple tools can reduce the effort involved for larger groups to do access control based regulation. Reputation currencies that enable users to share information about one another -- and, more importantly, to synthesize that information down to some usable form -- enable individuals to distinguish between multiple options and take action on the basis of that synthesis.

Examples abound of communities of interest that regulate themselves relatively well as a result of simple ratings systems: credit scores enable lenders to assess loan candidates for likelihood of repayment; eBay and Amazon ratings enable purchasers to assess the likely accuracy, quality, timeliness and customer service of various merchants.

However, the ratings systems within these communities keep track of only a very narrow subset of information -- typically the dimensions deemed most important to that particular community. They communicate that information well, and in communities like these, it becomes the dominant “reputation currency.” In these narrow communities of interest, there is widespread agreement about what counts as “good behavior” along those dimensions -- and consequently, the simple ratings systems enable an access control based system to function, even at large scale.

Along the selected dimensions, when one person behaves badly, others are able to leave signals -- ratings -- that get communicated broadly throughout the community. The next time that person goes to interact, the person that they are interacting with can take into account their reputation and may decide to turn away from them. This potential “loss of access” is the primary regulatory mechanism in these communities of interest.

Some examples of the narrow dimensions that get communicated well in particular communities of interest:

  1. Likelihood of debt repayment in credit and lending communities.
  2. Accuracy, quality of goods, timeliness and customer service in transaction platforms like eBay and Amazon.
  3. Cleanliness, communication and observance of house rules for guests on Airbnb.

Within these narrow boundaries, the enforcement mechanisms in these communities work very well. These ratings systems’ effectiveness stems from:

  1. taking large amounts of real world experience from individual interactions,
  2. translating that experience into a composable metric (such as a rating),
  3. synthesizing that metric with other such metrics (typically through an averaging function of some sort), and
  4. presenting the result alongside alternative options to enable comparison.
The result is that when a person is faced with a decision, the distillation of large volumes of human judgment -- along particular vectors (cleanliness, creditworthiness, etc.) -- gives them guidance that improves their situational awareness and helps them make a better informed choice.
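To make that pipeline concrete, here is a minimal sketch in Python of a composable rating metric. It is only an illustration of the synthesize-by-averaging idea above, under my own assumptions -- not the implementation used by any of the platforms mentioned:

```python
from dataclasses import dataclass

@dataclass
class RatingSummary:
    """A composable metric: raw totals rather than a bare average,
    so summaries from different sources can be merged exactly."""
    total: float = 0.0
    count: int = 0

    def add(self, rating: float) -> None:
        """Fold one interaction's rating into the summary."""
        self.total += rating
        self.count += 1

    def merge(self, other: "RatingSummary") -> "RatingSummary":
        """Synthesize two summaries (e.g., from two communities)."""
        return RatingSummary(self.total + other.total, self.count + other.count)

    @property
    def average(self) -> float:
        """The star-rating style distillation presented to users."""
        return self.total / self.count if self.count else 0.0

# Distill many individual judgments into comparable numbers:
seller_a, seller_b = RatingSummary(), RatingSummary()
for r in [5, 4, 5, 3, 5]:
    seller_a.add(r)
for r in [2, 3, 4]:
    seller_b.add(r)

# Presenting the options side by side is what enables comparison:
for name, summary in sorted([("A", seller_a), ("B", seller_b)],
                            key=lambda kv: kv[1].average, reverse=True):
    print(f"Seller {name}: {summary.average:.2f} across {summary.count} ratings")
```

One design choice worth noting: the metric stores sums and counts rather than averages, because averages alone cannot be composed while sums can. That is what allows signals gathered in one context to be synthesized with signals from another.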

However, beyond those areas covered by ratings systems, there are no particularly strong reputation currencies. Ratings systems are sometimes supplemented with text-based feedback, and this has proven somewhat useful, but it requires significant “interpretation” effort on the part of future users. This is in contrast with the very clear signaling available for those values covered by the synthesizing mechanism of a ratings system.

##### Privacy

Information privacy centers around an individual’s ability to exercise control over the distribution of information -- and consequently of value and risk. Privacy is about being able to control with whom we want to share this insight, with whom we want to create this thing.

Privacy doesn’t have to be perfect, but if my expectations about how far something I say or do is going to travel are consistently violated, I will stop saying and doing those things.

Privacy gives us a safe space within which we can take risks, communicate, collaborate, try things, innovate. I am not going to be able to invest a significant amount of time into pursuing some idea if I know that everyone else in the world can see everything I am doing -- and that as I am doing it, others are taking it. That actually gives us an incentive to just sit back and wait for others to innovate. That’s not a very functional system.

Privacy gives us the ability to experiment outside of the prying eyes of all the people around us. And when we look at the community as a whole, the result is that we end up with a much greater degree of diversity, because people were able to try new things. They were able to share different ideas and to behave in ways that would otherwise prove troublesome if done within eyesight of their boss -- or in front of government agents.

If we have no privacy -- if we live in a world where everything you do and everywhere you go is known by all, is connected to everything, is searchable -- I am not going to show up at the union meeting, because I am going to be afraid of losing my job.

If we had that 150 years ago, Harriet Tubman doesn’t help slaves escape to the North.

The environment that we live within is not constant; it is changing. And the norms that were working in the past might not work in the future. Privacy, because it results in a diversity of ideas, gives us access to new -- even taboo -- ideas. And as circumstances change, we are able to draw upon a more diverse set of ideas and more readily adapt to a new world.

In this way, privacy solves two problems. It serves as a critical component in giving individuals autonomy over their own lives -- something that is important for creating “a life worth living.” At the same time, privacy provides benefit to society as a whole, in the form of the resulting diversity and the resilience benefits that accompany it. Consequently, communities tend to thrive where individuals are able to enjoy considerable control over to whom their information will be distributed (information privacy).

##### Balancing privacy and transparency

If access control style systems are dependent on some level of transparency to create sufficient coordination to function -- but privacy is necessary to ensure autonomy, diversity and resilience -- the question becomes: “How can we find the appropriate balance?”

Of course, there is no single “balance” that is right for all situations. Related to the earlier discussion of enforcement mechanisms more generally, where there is too much transparency, enforcement of existing norms will be strong, but may undermine the evolution of new norms. Where there is too little transparency, coordination and collaboration will decrease due to an inability to adequately assess the risks involved in each interaction.

And similar to the earlier discussion of enforcement mechanisms -- social norms will emerge that will strike a balance between privacy and transparency that “works well enough” for the community that makes use of them.

Our job as identity systems architects is to design the tools that enable communities to “figure out these balances for themselves” rather than trying to determine it for them.

##### The magically useful prioritization tool: pain

Pain is often seen as only a problem, and not an asset. Yet pain is one of the most important tools for enabling individuals to set priorities and for communities to self-regulate. Pain is, quite literally, our ability to “sense” the world around us.

That is not to say that pain is an unmitigated benefit -- or that it is always of use. A little bit of pain can help us decide where to put our efforts. Too much pain may leave us crippled and incapable of action. Even here there is a balance.

The balances between privacy and transparency will emerge -- as a consequence of the experiences of individuals within the community. If a user gets “burned” after relying upon a particular person, or guidance tool -- the pain that they experience will create an incentive for them to seek out an alternative.

The balance that is struck doesn’t need to be perfect in order to be functional. However, if individuals don’t have the ability to take action and change the balance -- by demanding greater transparency, or alternatively better protection of privacy -- in response to their experience or anticipation of pain -- the community will lose the ability to self-regulate.

The path will never be perfect. But that is ok. Course correction -- not perfection -- is the goal.

#### What should we aim for?

And so we come to the thorny truths about both the promise, as well as the foreseeable limits, of what a next-generation web of trust system might be able to accomplish.

There is much value that our efforts can bring to the world -- but perfection is not within our grasp. This is in part because the relative perfection or imperfection of any one thing is always a result of the perspective of the observer. Because there are many perspectives, no one structure will be “perfect” for all of them.

As I see it, the goal worth aiming for -- as we attempt to design the next generation of the web of trust -- is as follows: let us build information system tools that enable

  • access control style regulation at scale,
  • across many values,
  • amidst disagreement,
  • in a changing context.

By enabling the individuals within our communities to choose for themselves the appropriate trade-offs in each particular context, we stand the greatest chance of creating a world that will thrive -- regardless of what circumstances may come.

#### A few technical outlines

This paper is already way too tl;dr, but I’ll briefly outline a handful of issues and technical approaches related to the above stated goal:

##### Criteria Based Publication

In order to give individuals control over what they want to share and with whom, it is critical that they be able to discriminate on the basis of arbitrary criteria, from sources of their choosing. In order for this to function well, there need to be forms of reputation signals available for them to rely upon. Several areas of work may prove fruitful here:

  • the trust signalling concepts being explored in TrustExchange,
  • the pluggable protocols and semantic parsing available in CEPTR,
  • the delegation and revocation benefits of capabilities security models
  • the transaction cost gains that can come from subscribing to someone else’s algorithm,
  • the resilience benefits that can come from creating derivative works on top of those algorithms
  • the emergence of social norms concerning appropriate and inappropriate behaviors in each of those areas -- and the access control based tools for enabling individuals to guide their own behavior based on those preferences.

Here is a rough outline of what an interaction in a criteria based publication context might look like (a code sketch of the criteria check follows the list):

  • Alice has recorded a drum beat
  • She publishes a low-fi sample publicly and makes a hi-fidelity full track available
    • Users with a profile that satisfies any of the following criteria can qualify to access the track for $1:
      • profile has at least 10 interactions in a peer production community -- with no complaints of piracy or license violations
      • profile has insufficient history or has had complaints previously, but these have been remedied to the satisfaction of the aggrieved party.
      • profile fails to satisfy the above criteria but the interaction is covered by an insurance policy
  • Alice includes a set of license terms on the track:
    • If you create a song or other derivative work that uses this work (the drum beat), you agree to pay ten percent of the revenues generated from that derivative work -- regardless of the revenue source.
  • Later Bob downloads the drum beat, puts it in a song and generates $5000 in revenue from streaming and downloads.
  • If Bob fails to pay Alice the $500, Alice can leave feedback on the profile that Bob used in that interaction.
  • The next time Bob goes to download a track -- or other resource -- from Carol, another publisher, depending on the criteria that Carol sets, Bob may not be able to gain access -- or may need to purchase insurance -- or otherwise offset the perceived risk to Carol’s satisfaction.
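As a sketch of the gating step in this outline, here is how a publisher’s criteria might be evaluated against a requesting profile. The `Profile` fields and the `qualifies` predicate are illustrative assumptions that mirror Alice’s criteria above, not a reference to any existing system:

```python
from dataclasses import dataclass

@dataclass
class Profile:
    """A pseudonymous profile, as visible to the publisher's criteria check."""
    interactions: int = 0        # interactions in a peer production community
    open_complaints: int = 0     # unresolved piracy / license-violation complaints
    complaints_remedied: bool = False  # past complaints remedied to the
                                       # satisfaction of the aggrieved party
    insured: bool = False        # interaction covered by an insurance policy

def qualifies(profile: Profile) -> bool:
    """Alice's criteria, expressed as a predicate: the profile qualifies
    if ANY one of the clauses is satisfied."""
    clean_history = profile.interactions >= 10 and profile.open_complaints == 0
    remedied = profile.complaints_remedied and profile.open_complaints == 0
    return clean_history or remedied or profile.insured

# Bob's profile before and after offsetting the perceived risk:
bob = Profile(interactions=3)
print(qualifies(bob))   # False: insufficient history, no insurance
bob.insured = True
print(qualifies(bob))   # True: the interaction is now covered by insurance
```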

If Alice gets burned, and this causes her considerable pain, she will likely alter her criteria in ways that leave her better protected in the future. Alternatively, if she gets no demand for her tracks with that set of criteria, she may alter the terms to make them less demanding of the Bobs of the world -- either in terms of more lenient access criteria -- or in terms of less “pricey” licensing terms.

In such a system, each of us will have access to the information that others -- operating in their own best interest -- are willing to share with us. This makes our access dependent, in part, on the value judgments and risk assessments being made by our fellows. In this way, each individual would have access to the information that is available publicly, as well as any information -- or synthesis of information -- that they are able to gain private access to.

##### Maintenance of pseudonymity

In a criteria based publication system, there is a risk that should be avoided.

If we do not structure the interactions appropriately, someone in Alice’s position can monitor the activities of the users that are interacting with her -- and can “pierce the pseudonymity” of those users by tying multiple pseudonyms together over time.

With regard to privacy and transparency, we previously mentioned two goals that lie in tension with one another:

  1. an individual’s desire to exercise some control over what they will share and with whom and
  2. the desire of others to gain enough information to be able to adequately understand their own context, so that they can make decisions that serve their own interests.

Here is a rough outline of how these two objectives can be balanced:

  • Alice publishes: a description of the resource, the criteria for accessing it, and the license terms.
  • Bob generates a one time use pseudonym (for all intents and purposes, this enables anonymous access), and uses that pseudonym to perform a lookup of the above items.
  • This enables Bob to acquire that information -- without having to use one of his existing pseudonyms to do so. If he were to inquire about the terms with one pseudonym and shortly thereafter presented a different pseudonym to gain access to that resource, it could enable another actor -- whether Alice, or some third party able to observe those requests -- to “stitch together” the two pseudonymous profiles.
  • The use of a one-time pseudonym for terms lookup enables Bob to avoid this particular “pseudonym stitching” problem.
  • In addition, when Bob selects a pseudonym to make use of in a particular interaction -- that does not necessarily mean that he is giving Alice access to all information about that pseudonym -- he may send a request to the “peer production union” to send her particular elements of his reputation there that are sufficient to satisfy Alice’s criteria. Other pieces of information at that “reputation provider” or that are stored with other “reputation providers” that he is able to exercise control over, would likely be hidden from Alice’s view.
  • However, when Alice receives the confirmation from the peer production union, she may also opt to look to other sources for additional signals concerning that particular profile.
  • The acquisition of the profile itself -- and not just a confirmation from the peer production union that Bob has satisfied her criteria -- is important because it is the ability for Alice to leave feedback related to that pseudonym in the future that creates the enforcement pressure that will help shape Bob’s behavior in his interaction with Alice.
  • At any point, the risk that Alice is taking is the loss of control over the asset that she is publishing (and any assets controlled by that asset).
  • At any point, the risk that Bob is taking is the loss of the reputation that has accumulated in the pseudonym at risk -- as well as any bond, insurance or other guarantee that has been included in the interaction.
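Here is a minimal sketch of the sequencing just described -- one-time lookup pseudonym, selective disclosure, and the feedback hook. The token-based “pseudonyms” and all of the names are stand-ins for real key material and credentials, chosen only to make the ordering of steps explicit:

```python
import secrets

def new_pseudonym() -> str:
    """A fresh, unlinkable identifier (a stand-in for real key material)."""
    return secrets.token_hex(16)

# Alice publishes a description of the resource, the access criteria,
# and the license terms.
listing = {
    "resource": "drum-beat-hifi",
    "criteria": "10+ clean interactions, remedied history, or insured",
    "license": "10% of revenues from derivative works",
}

# Step 1: Bob reads the terms under a one-time pseudonym, so the lookup
# cannot later be stitched to whichever profile he presents for access.
lookup_id = new_pseudonym()
terms = dict(listing)        # fetched "as" lookup_id, which is then discarded

# Step 2: satisfied with the terms, Bob selects an established pseudonym and
# asks his reputation provider to disclose only what Alice's criteria require.
bob_pseudonym = "bob-music-2024"                         # illustrative name
disclosure = {"interactions": 14, "open_complaints": 0}  # nothing more revealed

# Step 3: Alice records which pseudonym she granted access to. Her ability
# to leave feedback on that profile later is the enforcement pressure that
# shapes Bob's behavior in the interaction.
access_log = {bob_pseudonym: {"resource": terms["resource"],
                              "disclosed": disclosure}}
print(access_log)
```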

This sort of structure can enable users in an interaction to strike a balance of privacy and transparency that works for both parties and that can respond to new tools, new threats and new shifts in the external environment.

#### Additional Questions

Are there ways to make use of hierarchical keys or other methods for revealing greater and greater detail in a “progressive trust” style dance -- either related to a single pseudonym, or to transitioning from one pseudonym to another while minimizing “accidental” tying of pseudonyms -- or limiting the publication of such tied pseudonyms to the intended recipients (an expectation that itself would be enforced by access control style considerations)?
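As one illustration of what the first question gestures at, a hierarchical derivation chain lets progressively deeper keys be revealed as trust grows. The sketch below uses a plain HMAC chain -- an assumption standing in for BIP32-style hierarchical keys or any specific proposal:

```python
import hashlib
import hmac

def derive_child(parent_key: bytes, label: str) -> bytes:
    """Derive a child key from a parent key and a label (one-way)."""
    return hmac.new(parent_key, label.encode(), hashlib.sha256).digest()

# A root pseudonym key, never revealed directly.
root = hashlib.sha256(b"illustrative root seed -- not real key material").digest()

# Progressively deeper keys gate progressively more detail:
coarse = derive_child(root, "public-listing")
finer = derive_child(coarse, "transaction-history")
finest = derive_child(finer, "contact-details")

# A counterparty who has been given `finer` can verify that `finest`
# descends from it (re-derive and compare)...
assert derive_child(finer, "contact-details") == finest

# ...but, because HMAC is one-way, cannot work back up to `coarse` or
# `root`, or sideways to siblings derived under other labels.
```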

Can these types of architectures be instantiated in an economical fashion with layers of encryption in a single name space such as a single blockchain?

#### Resources

Dunbar, R. I. M. (1992). “Neocortex size as a constraint on group size in primates.” Journal of Human Evolution 22 (6): 469–493. doi:10.1016/0047-2484(92)90081-J