FOR A FULL VERSION OF THIS WHITEPAPER INCLUDING DIAGRAMS PLEASE EMAIL mindshifts@blueyonder.co.uk

Every effort has been taken to ensure the accuracy and completeness of information presented in this report. However, Mindshifts and Prior Intelligence cannot accept liability for the consequences of action taken based on the information provided.


THE TAO OF P2P

What is P2P? How does it work? Is it important? How do I make money out of it?

Table of contents

1. Setting the scene

2. Uses of P2P - Resource recycling: bandwidth sharing and hive computing

2.1 Collaborative working methods - File sharing, collaborative computing and collaborative hive computing

3. The importance of P2P

4. The application of P2P

4.1 Current problems, current solutions - Porivo case study

4.2 Current problems, future solutions - Sandia Labs research case study

5. Person-to-person

5.1 Next generation markets – Are we really all now bank managers/shop owners/producers?

6. Conclusion


If someone were to ask the average computer user, one fairly conversant with the workings of the Internet, what the term P2P meant to them, the answer would undoubtedly contain the name Napster, the term file sharing and the phrase copyright infringement. P2P has become fairly synonymous with the teenage revolution of file sharing, most visibly the sharing of music files, but when it comes to the use of P2P within business much less is forthcoming. File sharing itself is merely one use of P2P, and not even its most lucrative or useful.

Currently a few individuals are working to map out the potential of P2P, and some have gone so far as to suggest possible uses for it beyond file sharing, but to the uninitiated P2P is still a technological enigma. The aim of this white paper is to gently introduce those unaware of the full potential of P2P and to put forward and answer the questions: What is it? How does it work? Is it important? How do I make money out of it?


1. Setting the scene

Prior to public interest being sparked by the well-publicised legal battles between Napster and the American recording industry, P2P was quietly evolving alongside the Internet. The concept of hive and collaborative computing, one of the major uses of P2P, was outlined around the time of the development of the personal computer in the late 1970s and early 1980s at the now famous Palo Alto research facility. Its evolution has been slow, but like the Internet it has been ongoing and looks set to continue regardless of current or future booms and busts in the technology sector.

P2P is less a single concept and more a group of technologies, ideas, opportunities and solutions. The term itself can be read as peer-to-peer but equally as person-to-person. The distinction is between the technology of P2P (peer-to-peer) and the uses, services and business models resulting from interaction between individuals through that technology (person-to-person).

The technology of peer-to-peer is fairly basic in principle and revolves around the concept of creating micro-networks from groups of peers. A peer might be a single computer user, a small or large company network or a large supercomputer. What the peer consists of is not important; what is important is that the peer is enabled to link with other peers. The relationship between these peers is also flexible: it might entail sharing a file or sharing a computer resource such as processing power. The strength lies in the ability to enlarge and contract this user-controlled network in an organic manner while utilising the backbone of the public and/or private network. This description oversimplifies the concept of P2P somewhat, but it gives a framework from which to develop a more complex understanding. The basic concept of peer-to-peer is illustrated in fig 1 below.

Fig 1: Peer-to-peer Networks


2. Uses of P2P - Resource recycling: bandwidth sharing and hive computing

The concept of freeing up the processing power, computational capacity and storage of computers, servers, networks or any other area where such resources sit idle is an attractive one in a resource-hungry corporate computing environment.

Bandwidth sharing and hive computing are two areas of P2P development. Both make use of under-utilised resources.

Bandwidth sharing uses the spare bandwidth available on remote computers within a network. This network can be enlarged or reduced depending on the peers within it and on demand, and it can be made up of closed corporate networks or the near-limitless potential of an Internet-wide peer-to-peer network.

An example of bandwidth sharing in practice is its use within distributed search engines designed to enable true real-time web searches. Rather than relying on web spiders (automatic applications which visit sites and build a centralised URL database), these applications use participating computers within a peer-to-peer network to search for sites in real time, making use of their spare computing and bandwidth capacity. Each peer involved takes one part of the search request and places only a small bandwidth demand on its respective network. Rather than one user requesting all of the information, several peers are involved, each at a different point of the peer-to-peer network, so the public network (the Internet) has a reduced demand placed on it. The demand is broken up and requested through numerous 'pipes' rather than one (the user's), and the results are collated and delivered to the peer making the request (the user). By making numerous small demands the process is sped up and handled more efficiently. Efficiency is further increased because bandwidth is only taken from computers in the peer-to-peer network that are idle or not using all of their available capacity. Peers can be activated and de-activated as resource and demand dictate. As a relatively free resource, bandwidth could be traded from bandwidth farms (recruited peers offering their free bandwidth capacity) to those who demand it, e.g. streaming media companies, as and when required. Equally, large corporations could recycle their own under-utilised resources within their closed private networks.
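
A minimal sketch of the fan-out idea described above, in Python, may make this concrete. The peer views, the threading model and the query matching are all invented for illustration; a real distributed search engine would communicate over the network and use far more sophisticated indexing.

    # Illustrative sketch only: a coordinator peer fans a search out to helper
    # peers, each of which scans its own slice of the web in parallel, then the
    # coordinator collates the partial results. Peer communication is simulated
    # with threads; a real system would use sockets or an overlay protocol.
    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical per-peer view of the web: each peer only searches the sites
    # reachable through its own connection.
    PEER_VIEWS = {
        "peer-a": {"example.com/news": "markets rally on tech stocks"},
        "peer-b": {"example.org/blog": "peer-to-peer networks explained"},
        "peer-c": {"example.net/wiki": "history of distributed computing"},
    }

    def peer_search(peer_id, query):
        """Run the query against the slice of the web this peer can see."""
        view = PEER_VIEWS[peer_id]
        return [(peer_id, url) for url, text in view.items() if query in text]

    def coordinated_search(query):
        """Fan the query out to every participating peer and merge the results."""
        with ThreadPoolExecutor(max_workers=len(PEER_VIEWS)) as pool:
            partials = pool.map(lambda p: peer_search(p, query), PEER_VIEWS)
        return [hit for partial in partials for hit in partial]

    if __name__ == "__main__":
        print(coordinated_search("peer-to-peer"))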

In a similar manner, peer-to-peer networks can be constructed that make use of the idle processing power found in desktop computers. The potential is immense, with estimates of 10 billion MHz of processing power and 10,000 terabytes of under-utilised storage. The resource is relatively free, as it can be collected from computers during idle periods, e.g. at night, or run in the background behind user applications. Large-scale computational problems, such as analysing financial data and running billing cycles, would be the primary targets for such a solution. Intel states that an average large business has two and a half times the computing power available in its aggregate desktops than is accessible from all of its servers. On average, companies spend between 8% and 25% of their outgoings on IT hardware, depending on the type of business; considering this and the current economic climate, not recycling this wasted resource seems ludicrous.

2.1 Collaborative working methods - File sharing, collaborative computing and collaborative hive computing

An integral aspect of peer-to-peer networks is the inter-communication between the peers within the network, the person-to-person relationship. This linked communication allows collaborative working methods to be built on top of peer-to-peer networks.

File sharing is perhaps the most obvious example of a collaborative relationship, as it allows users within a peer-to-peer network, with controlled access or otherwise, to serve content directly off their C drive (internal hard disk). Users might be anonymous, with content de-centralised and freely available to all, or controlled, for example through a corporate registration process allowing corporate peer-to-peer network users to access each other's computers for files. The de-regulated model is the file-sharing model preferred by the original Napster and its clones.

Similar in nature but developing the concept further is collaborative computing. This model uses the communicative nature of peer-to-peer networks to enable specific projects to be worked on collaboratively. Real-time communication of data between parties, instant messaging and online presentations are all examples of this process in action. The Napster of this model is Aimster, which bundles instant messaging with content serving to enable users to set up collaborative networks to chat and share files. The key to this model is the ability for a project to be worked on by a group irrespective of its members' geographic locations, while each member can view and make changes. Further, complex projects that require significant system resources and would be difficult for a geographically remote team to work on, for example engineering projects involving CAD and modelling, can be provisioned for by utilising peer-to-peer resource sharing concurrently with peer-to-peer collaborative working.

Taking attributes of both collaborative computing and resource recycling, collaborative hive computing takes extremely complex problems which demand a large degree of system and bandwidth resource and enables them to be solved through a peer-to-peer model. These problems might include complex analytical activities such as germ and virus modelling or financial forecast modelling, but equally could involve any large problem that would stretch the resources of a single system or network. The hive peer-to-peer network breaks a large-scale computational problem into smaller sections requiring calculation, which it sends out to various remote consoles or groups of consoles (hive peers). Each section of the problem is dealt with independently, then sent back to the referring console (peer) to be re-constructed as a completed solution.

The recycling of resources lies in the fact that each hive peer is generally only activated when idle. A screen saver lets the hive peer's original user know that their computer is being utilised; when they return to use their system, the results compiled thus far are forwarded to another idle hive peer identified by the hive peer-to-peer network. Other variants of the model solve the computational problem in the background behind any applications the hive peer user might be running. This variation relies on the fact that most office applications require only a fraction of the capabilities of modern desktop computers, which are over-resourced. Under-utilised storage space on hard disks can also be used where data does not need to be retrieved on a regular basis or on a mission-critical time-scale; aggregate storage space is immense, as corporate desktops on average use only a minimum of space for MS Office applications. Restricted data can be protected by cutting off the hive peer-to-peer user's access to certain parts of the hard disk being utilised. The basic model of hive computing is illustrated in figure 2 below.

Fig 2: Hive Computing
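
A minimal sketch of the hive model described above and in figure 2, written in Python, shows the split, dispatch and reassembly steps. The idle check, the work-unit size and the per-unit calculation are stand-ins invented for illustration; a real hive system would dispatch units over the network and re-issue them when a peer's owner returns to their desk.

    # Illustrative sketch only: a coordinator breaks one large computation into
    # work units, hands each unit to an idle hive peer, and reassembles the
    # partial answers. Peers are simulated in-process.
    import random

    def split_work(numbers, unit_size):
        """Break the large problem into independently solvable work units."""
        return [numbers[i:i + unit_size] for i in range(0, len(numbers), unit_size)]

    def peer_is_idle():
        """Stand-in for an idle-detection check (screensaver active, low CPU)."""
        return random.random() > 0.3   # roughly 70% of peers are idle at any moment

    def solve_unit(unit):
        """The per-peer calculation; a trivial sum of squares stands in for the real job."""
        return sum(x * x for x in unit)

    def hive_compute(numbers, unit_size=1000):
        results = []
        pending = split_work(numbers, unit_size)
        while pending:
            unit = pending.pop()
            if peer_is_idle():
                results.append(solve_unit(unit))   # an idle peer completes the unit
            else:
                pending.insert(0, unit)            # re-queue for another idle peer
        return sum(results)                        # reassemble the completed solution

    if __name__ == "__main__":
        data = list(range(10_000))
        assert hive_compute(data) == sum(x * x for x in data)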


3. The importance of P2P

The uses of P2P have been alluded to thus far but the very real importance and impact of this technology need further illumination.

An indicator of the very real importance attached to P2P is the position it holds within Microsoft's future business model, which utilises its .NET technology as a means, Microsoft envisions, to run personal networks within customer environments. The .NET technology allows every part of a user's environment to be controlled by the user. To take an example, a person could run an entire home network encompassing entertainment devices (iTV, music systems, audio-visual equipment, games consoles etc), household systems (heating, water etc) and appliances such as fridges and cookers. In essence the household becomes a single micro-network which connects to a wider macro-network when demand dictates; the fridge, for example, might communicate on the macro-network by connecting to another micro-network situated in a grocery company when its CPU detects the need for milk. This model bears more than a striking resemblance to the basic peer-to-peer model. The implication of such a shift in business thinking is even more pronounced when Microsoft highlights its business philosophy. The next generation of Microsoft products is envisioned as moving away from the desktop and the applications that run on it, the original Microsoft business strength, toward an environment of networks that pervades all areas of life, not just those areas we currently use computers within. Windows XP is Microsoft's first step down this path, incorporating a full suite of inter-operating applications, solutions and plug-and-play compatibility with external hardware.

There are currently numerous companies investigating the potential of P2P but Microsoft's explicit support of the technological concept is perhaps the most telling and acts as a signpost of what might be expected in the future.


4. The application of P2P

Like many new concepts it is sometimes difficult to see exactly how P2P might be used and what path it will take as it evolves. P2P can be seen as a radical technology but equally as a simple solution to many of the problems which face the business world and the individual wading through the information technology quagmire. In the short term P2P will develop along the lines it has first been noted for, i.e. file sharing, content distribution and information filtering. In the mid to long term P2P has an even brighter future as a solution for next generation security, hacker and virus protection and as the mechanism for future business transactions in a web-enabled environment. But what exactly are these applications?

An obvious use for content serving (file sharing) is the company-wide Intranet. One of the key problems company-wide Intranets face is the need for centralised control. The layers of control and bureaucracy a centralised procedure creates can complicate getting information onto the Intranet, counteracting the advantage of the system in the first place. A Napster-like file-sharing approach, utilising a centralised index with distributed redundant content, could remove obstructions in the delivery of company-wide communication, best practice information and/or competitive intelligence. In the commercial realm, file sharing of both entertainment services and information resources such as data banks for online libraries could be easily enabled. It is feasible to conceive of a world-wide inter-linked peer-to-peer network that contained every book ever written as well as all those books currently being produced; writers might give access to the internal drives of their desktops and receive feedback, or capital through subscription, by allowing access. Those wishing to read a book prior to publication could literally do so as it was being written.
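
A minimal sketch of the Napster-like approach described above, in Python, shows the division of labour: only the index of who holds what is centralised, while the documents themselves stay on the peers. The class names and the in-process 'network' are invented for illustration.

    # Illustrative sketch only: a lightweight central index records which peer
    # holds which document, while the content itself stays on each employee's
    # machine. Peer fetches are simulated with dictionary lookups rather than
    # network calls.

    class CentralIndex:
        """Maps document titles to the peers that hold a copy."""
        def __init__(self):
            self.holders = {}

        def publish(self, peer_id, title):
            self.holders.setdefault(title, []).append(peer_id)

        def locate(self, title):
            return self.holders.get(title, [])

    class Peer:
        def __init__(self, peer_id, index):
            self.peer_id = peer_id
            self.index = index
            self.documents = {}

        def share(self, title, content):
            self.documents[title] = content
            self.index.publish(self.peer_id, title)      # only metadata is centralised

        def fetch(self, title, peers):
            for holder_id in self.index.locate(title):
                return peers[holder_id].documents[title]  # direct peer-to-peer transfer
            return None

    if __name__ == "__main__":
        index = CentralIndex()
        peers = {pid: Peer(pid, index) for pid in ("alice", "bob")}
        peers["alice"].share("best-practice.txt", "Always version your documents.")
        print(peers["bob"].fetch("best-practice.txt", peers))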

With bandwidth sharing the obvious customer is the business requiring sporadic and significant amounts of bandwidth, such as media streaming companies. The bandwidth available from users not taking full advantage of their connections could be harvested by such companies as and when needed, without the need for investment in fixed high-capacity bandwidth delivery mechanisms. A secondary market developing on the back of this demand is that of agencies employed to recruit those willing to rent out their spare capacity. The viability of this model is further strengthened by nascent technologies such as virtual reality, nano-modelling (modelling for nano-technology) and real-time sensation stimulation. Although seemingly in the realm of science fiction, these services are in the early stages of development and would be delivered through the network of the future. In contrast to predictions of a 'bandwidth glut', provisioning bandwidth for such services would place a great drain on bandwidth resources, and the level of usage required would prove difficult to predict.

The computational improvements P2P resource recycling enables are wholly evident, and there are already several large corporations making use of them. Intel has saved half a billion dollars on chip design since 1990 by using peer-to-peer hive computing methods to simulate and validate chip designs, while JP Morgan has made use of its spare computational resources for data crunching. This data crunching has also been offered to Wall Street brokers who must make decisions within very short time-scales. These decisions demand analysis of risk assessments and market conditions of immense complexity, and analysis can take tens of hours of processing time on a normal system. Due to time constraints these decisions must otherwise be made on limited data and acquired broker experience, as well as a good deal of luck. P2P resource recycling allows the linking of under-utilised computers within these companies to solve these financial conundrums. The result is to reduce the time analysis takes from hours to minutes, while the data can be run at night to further increase the use of what would normally be 'dead time'. When one considers that many of the largest corporations have sites throughout the world, this capability becomes ever more powerful, as computational jobs could be passed around the various sites when the office networks were not being used.
The degree of processing power P2P applications release has another use: filtering the vast amount of data presented to us through the Internet and other information delivery mechanisms. P2P solutions have tackled this problem and are now incorporated into Internet search engines utilising various ingenious mechanisms enabled by P2P. These solutions are based around the use of each peer as a micro-analyser within a larger macro-network. Information can be filtered and analysed much faster in this way, as it is broken up into smaller segments then re-assembled as a complete solution, working in much the same way as the human brain. Developments in this area have seen Internet search engine companies using techniques that return search results based on the individual user's preferences and their relevance to them. This is achieved by using recruited peers within an Internet search engine as referrers. These referrers are peers within the search engine's peer-to-peer network who allow access to their Web histories and bookmarks. These resources are scrutinised each time a search is made through the search engine. The theory follows that, as with traditional academic citation analysis (how normal search engines work), the most valuable resources will have the most hits in the referrers' histories and bookmarks. Over time, frequently selected referrers would move to the top of the search engine user's group while others would drop out or place lower. Throughout this process the search engine user builds up a group of referrers similar to themselves in terms of the type of websites they visit, and as a result the relevance of the user's searches increases.
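
A minimal sketch of the referrer mechanism described above, in Python, shows how results can be scored against referrers' histories and how referrers whose picks are followed gain influence. The histories, weights and scoring increments are invented for illustration.

    # Illustrative sketch only: rank search results by how often they appear in
    # the histories and bookmarks of 'referrer' peers. Referrers whose
    # suggestions the user actually follows gain weight, so the user's personal
    # referrer group gradually specialises around their interests.
    from collections import defaultdict

    # Hypothetical referrer peers exposing their browsing histories.
    REFERRER_HISTORIES = {
        "ref-1": {"python.org", "arxiv.org", "example.com/p2p"},
        "ref-2": {"example.com/p2p", "news.example.net"},
        "ref-3": {"python.org", "example.com/p2p"},
    }

    referrer_weights = defaultdict(lambda: 1.0)

    def rank(candidates):
        """Score each candidate URL by the weighted number of referrers citing it."""
        scores = {url: 0.0 for url in candidates}
        for ref_id, history in REFERRER_HISTORIES.items():
            for url in candidates:
                if url in history:
                    scores[url] += referrer_weights[ref_id]
        return sorted(candidates, key=lambda url: scores[url], reverse=True)

    def record_click(url):
        """Reward referrers whose histories contained the result the user chose."""
        for ref_id, history in REFERRER_HISTORIES.items():
            if url in history:
                referrer_weights[ref_id] += 0.1

    if __name__ == "__main__":
        results = rank(["example.com/p2p", "news.example.net", "unrelated.example"])
        print(results)
        record_click(results[0])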

These applications of P2P are merely a glance at the possible potential of the technology, and although some of these deployments may seem far-fetched, each is a current method employed by both established and start-up businesses operating presently or likely to be operating in the next few years. By way of example I will now discuss two uses of P2P in a real business environment. The two companies selected each demonstrate a use of P2P, one currently operating and one likely to be released in the near future. Both tackle a current problem and offer a powerful and cost-effective solution, two of the key attributes of P2P.

4.1 Current problems, current solutions - Porivo case study

One area where peer-to-peer can help is the set of problems facing next generation service providers delivering services through IP networks. These problems concern the complexity of billing for content. Content and the content economy are explained in further detail later in this piece, but current attempts to charge for content services have looked at the need to deliver a standard, set level of service. This requirement for Quality of Service (QoS) billing for content-based services concerns many of the new services delivered through IP networks where the quality of service delivery (level of bytes delivered, level of bandwidth and delivery within time-scales) is important. Examples of services that would demand such QoS include video streaming, audio streaming, online gaming and so forth. Other services might demand set times for delivery, e.g. time-dependent financial data delivered to financial trading exchanges. All these services would need to be rated (gauged) by QoS.

An obvious problem in gauging QoS is ascertaining exactly what sort of service a user has received. Performance can be monitored, and various billing vendors offer this capability within their IP solutions, but the level of service delivered between the network and the end-user is a blind spot where performance cannot be monitored. The last mile, the direct connection from the network to the user, is an unknown entity, yet in order to bill correctly for QoS this access to the last mile is essential, as it represents the 'real' user experience. Variables which complicate this include the type of delivery network the user has for the last mile, e.g. copper, fibre, broadband, DSL or wireless, and the user's geographic position (distance from the server, content originator and/or caching facility).

One solution peer-to-peer networks offer is the ability for the peer (user) to test the service direct from their PC, which offers a direct measure of any service delivery. Porivo is one peer-to-peer company that offers a service along these lines to companies wishing to bill for QoS. In essence Porivo allows clients to test end users' level of service from web sites in numerous geographical areas and through various Internet connection types utilising peer-to-peer technology. The solution is a variation of collaborative peer-to-peer, utilising agents (peers) that feed data back to a controlling agent (editor).

Porivo offers its service as a means for website owners and ISPs to check how their site is performing. Variables gauged include the speed of page updates and the general performance of the site. The company states that traditional web application testing falls short in its inability to monitor the last mile. In essence Porivo's application overcomes the 'blind spot' that monitoring technologies sitting behind the firewall or on servers located in datacentres confront in last-mile monitoring. Although Porivo's solution is not used within IP billing solutions, it could easily be adapted to them.
Porivo overcomes the problem of recruiting peers by using a 'test bed' of enrolled users who download its application onto their computers. Selected users are incentivised with PayPal credits (an online payment service), claimed through online purchases or forwarded direct to the user's bank account. A leader-board ranking system based on time spent online and processing time utilised is also used as an incentive.
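
A minimal sketch of the kind of last-mile measurement such a recruited peer could make, in Python, times a page download from the peer's own connection and reports the result. The target URL and the reporting step are placeholders; Porivo's actual agent and protocol are not described in this paper.

    # Illustrative sketch only: a recruited peer measures how a page performs
    # over its own last-mile connection and hands the result back to a
    # coordinating peer. The reporting step is a stand-in for a real upload.
    import time
    import urllib.request

    def measure_page(url):
        """Time a full page download as experienced from this peer's connection."""
        start = time.monotonic()
        with urllib.request.urlopen(url, timeout=10) as response:
            body = response.read()
        elapsed = time.monotonic() - start
        return {"url": url, "bytes": len(body), "seconds": round(elapsed, 3)}

    def report(measurement):
        """Stand-in for sending the result to the coordinating peer."""
        print("would report:", measurement)

    if __name__ == "__main__":
        report(measure_page("http://example.com/"))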

The application runs in the background behind any applications on the recruited user's computer, with peers selected against various criteria. These criteria include:

- Geographic position
- Internet connection (dialup, DSL, T1 etc)
- Dialup connection (14.4K, 28.8K, 56K and so forth)
- Online availability (time spent online and set at a minimum level)

A strong motivation for the use of a peer-to-peer solution is its cost effectiveness. Using peer-to-peer as a means of QoS billing saves financial resource by using relatively free resource - recruited private peers. Porivo offers its solution with packages starting at $1,000. In contrast, billing solutions capable of QoS can be considerably more expensive, ranging into the hundreds of thousands of dollars.

Peer-to-peer solutions within the billing environment also offer opportunities within Service Level Agreement (SLA) tracking and breaking up the complexity of billing cycles into component parts that can then be aggregated as a completed solution (billing run) using the concept of collaborative computing.

4.2 Current problems, future solutions - Sandia Labs research case study

A problem that plagues many contemporary computer users is the irritation of computer viruses. This problem is only likely to get worse with the percolation of the Internet into our lives and the proliferation of cyber-terrorists, viruses, business espionage and spying, hackers and even rogue states intent on damaging international corporations.

Computer virus proliferation on networks is very much like the spread of disease in a human host, in that viruses move from cell to cell or computer to computer. In a peer-to-peer network the similarity with the human body is even clearer, with each peer acting like a cell while the network becomes the body. The virus spreads from cell to cell and peer to peer. If one conceives of the electronic virus threat in this way then a solution presents itself from the natural world - the white blood cell.

The threat peer-to-peer poses for virus proliferation and hacker access is a negative by-product of peer-to-peer networks, as these connection points can act as a means of spreading viruses or giving hackers access to networks. However, the technology can also be used as a solution to some of the same problems. Sandia National Laboratories is currently working on intelligent software agents that behave in a similar way to white blood cells in the body by removing or blocking virus or hacking threats. The system was originally formulated to defend against long-term probing of networks by hackers who attempt to identify computers which could be 'compromised' and then attacked with linked computers. Primary threats for such attacks were identified as rogue governments and criminal groups. The viability of such an attack was recently brought to popular attention when Sony warned of the potential military use of networked (linked) PlayStation 2s in countries banned from importing high technology, e.g. Iraq. Aside from console makers' publicity stunts, the concept of using high-powered games machines as processors in a networked environment is inherently possible, utilising the PlayStation's in-built capability for network connection, originally designed for group gaming.

Sandia's solution to this problem is what it terms 'a multi-agent collective'. This collective could range from a small business's LAN to a massive corporation's global network or a public shopping group's web site. Sandia's peer-to-peer application reacts to 'port scans' (probes of the network/Internet addresses on the computer that allow entry into the network or desktop from the Internet or external networks, which hackers scan for activity and weaknesses). Each 'agent' (Sandia's term for the application situated on each node and between each networked computer) communicates with the other agents in the supra-net (a network of agents residing above the underlying network) to report any irregularities such as continuous port scans. This multi-agent communication allows actions to be taken proactively rather than reactively, i.e. rather than waiting until someone incorporates information into a virus checker. By acting as proactive units the agents overcome one of the major problems currently facing security systems - the fact that the network user defends only against virus attacks that are already known, and protection software only recognises viruses that have already been identified. New developments are not provisioned for but reacted to, and only specific virus patterns are recognised; new or unique developments in viruses are not. In this manner security is always one step behind the virus developer.
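
A minimal toy sketch of the multi-agent idea, in Python, shows agents counting probes on their own node and gossiping alerts so the whole collective closes the targeted port proactively. The threshold, the gossip mechanism and the response are invented for illustration and are far simpler than Sandia's agents.

    # Illustrative sketch only: each agent watches its own node for port probes
    # and alerts its neighbours; once enough probes from one source are seen,
    # every agent in the collective closes the targeted port before an attack
    # succeeds. No single agent controls another.
    from collections import Counter

    class SecurityAgent:
        def __init__(self, node_name, probe_threshold=3):
            self.node_name = node_name
            self.probe_threshold = probe_threshold
            self.neighbours = []           # other agents in the collective
            self.probe_counts = Counter()  # probes seen per source address
            self.closed_ports = set()

        def observe_probe(self, source, port):
            """Called when this node's scan detector sees a probe."""
            self.probe_counts[source] += 1
            if self.probe_counts[source] >= self.probe_threshold:
                self.broadcast_alert(source, port)

        def broadcast_alert(self, source, port):
            self.react(source, port)
            for agent in self.neighbours:
                agent.react(source, port)   # proactive, collective response

        def react(self, source, port):
            if port not in self.closed_ports:
                self.closed_ports.add(port)
                print(f"{self.node_name}: closing port {port} after probes from {source}")

    if __name__ == "__main__":
        agents = [SecurityAgent(f"node-{i}") for i in range(3)]
        for agent in agents:
            agent.neighbours = [a for a in agents if a is not agent]
        for _ in range(3):
            agents[0].observe_probe("203.0.113.9", 22)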

The Sandia protection agents react to attacks by turning off services, closing ports, switching to alternative means of communication and tightening firewalls, while the faintest probes, almost undetectable from system noise, can be identified as hackers probe and attempt to gain control of a weak link in a network. The agents rely on pattern-recognition systems, which enable detection of hidden programs within systems designed to activate at a later date, e.g. Trojan Horse viruses, and shut down computers before the virus can activate. The agents can also remove from the network computers which have been compromised or are being used as incoming request terminals in Denial of Service (DoS) attacks, which bombard networks with repetitive requests. The system relies on a de-centralised structure of authority with no one agent controlling another; in this way no single attack can bring down the whole network of agents. The application can also act as a probe to locate and gather intelligence on the operating system attacking a network and formulate a course of action in response, such as bombarding the attacker with repetitive requests. Old agents are replaced with fresh agents periodically, enabling hacked systems to be cleaned. This creation of new agents can be paralleled with white blood cells, where new cells are created as old ones die or weaken while fighting infection, electronic or otherwise. The analogies with artificial life do not stop there. Sandia describes its agent program structure as a "program genome". These genomes are downloaded and create agents from scratch, which then connect with other agents, becoming members of the 'security community' while constantly evolving. The concept is illustrated in fig 3 below.

Fig 3: Sandia virus agent architecture



The solution has been tested under 'a real threat scenario', withstanding the repeated attempts of a group of specialist crackers, known as the Red Team and employed by Sandia for such trials, to breach the system over a two-day period.

Although still in lab development, commercial use of this application could have huge potential as the next generation of virus and Internet attack defence. Currently the system is envisaged (speculatively) to be commercially available by 2003. The potential for this multi-agent application, and further generations of it, is great when one considers the future of the Internet and of virus and hacker attacks.


5. Person-to-person

Running in parallel to the technology of peer-to-peer are the services made possible by it. These services can be generically labelled person-to-person services and are primarily based around the concept of the content economy.

The content economy, to the uninitiated, is the phrase used to describe a potential next generation business paradigm based around the commodity of content. This 'content' might be information, media, music and so on ad infinitum; the only consideration is that it is delivered within an online/network environment. What is truly unique about the content economy is the opportunity it gives to everyone to become a content provider. Advancements in home computing power, faster networks and IP technology allow any individual to become, for example, an MP3 Mozart. Unlike the 18th century Mozart, the 21st century version can instantly produce, market and deliver his works to a global audience through the Internet and, through person-to-person payment, reap the financial benefits. The capabilities of the peer-to-peer network allow users to create their own 'virtual' or online marketplace. This concept is not new; it mirrors the market garden model of commerce, which allowed small-scale 'garden' farmers to sell produce at fairs held in most UK cities. The unique point of these market gardens was that the produce was literally created in gardens or on small-scale private farms, yet individuals could come together into a form of professional marketplace to sell it - a model now re-created in the online world.

5.1 Next generation markets – Are we really all now bank managers/shop owners/producers?

A recent advertising campaign extolling the virtues of the online environment for business claimed 'we are all bank managers now'. The point of this comment was that the Internet empowers the individual to deliver, create and ultimately better control their business environment, but is this what has actually happened? The strength of the Internet in increasing the speed of transactions and business processes and widening a market from a local to a global environment at the click of a mouse has been impeded by the processes of the offline world. The offline world of business transactions (the processes involved in all aspects of business, but specifically within the financial environment and the vagaries of different markets, currencies and trade) does not and cannot, currently, move at the speeds promised by the Internet. In order to achieve this ideal, those lumbering commercial and financial processes of the offline world would need to utilise the real-time capabilities of the Internet while removing as much of the back-office delay as possible by bringing it online.

The lynchpin of this process is the interaction through the Internet between individuals - the person-to-person interaction. Person-to-person offers the possibility of developing the next generation of business transaction online between individuals, with content as the produce, delivered within a peer-to-peer network environment and paid for with virtual online currency - person-to-person currency. The improvements of the person-to-person transaction are numerous. By making all aspects of the transaction electronic the transaction is immediately accelerated. Revenue assurance is improved by the process occurring in real time, allowing the credit of individuals to be verified, while electronic credit can only be extended if the user has credit available. As the electronic currency system connects directly with the electronic banking systems of those involved in the transaction, this verification can take place instantly. If the service gains from instant online delivery, then why not the payment method? Money is simply a form of data, which in a wired banking system is already held as digital data. The viability of this model of commerce is undergoing development at this very moment, with various online currencies scrambling for customers. The longevity of the concept, and of person-to-person itself, looks set to grow ever stronger as its primary users are online youths, the next generation of business and transaction commerce enablers.


6. Conclusion

Hopefully the questions I posed at the beginning of this white paper have been answered in this introductory look at P2P, and the subtle distinctions between its uses as a technology and the services possible through that technology have been introduced.

As I stated at the beginning of this white paper, this is a mere glance at the exciting opportunities and changes that P2P will enable, as well as the merest glimpse of its potential uses. Whatever the outcome, some knowledge and understanding of P2P is demanded. Just as some knowledge of the Internet is required today, knowledge of P2P will be required in the future, due to the impact it will have on society, just as its Internet parent had before it.

William Gibson, author of Neuromancer, observed that "The future's already arrived; it's just not evenly distributed yet." With P2P the distribution network has arrived.