Review of "Rethinking the Design of the Internet" by Clark

David Clark's article on rethinking the design of the Internet analyses the original design goals against the way the Internet is actually used today. The Internet was designed as an end-to-end system: the intelligence lives in the communicating hosts, which do not rely on the underlying communication architecture. It is therefore the responsibility of the users and systems at the ends to check the validity of data and to process or discard it, while the intermediate systems provide only minimal functionality. This is what made the Internet so popular: simple system designs and minimal effort to attach an end node.
The Internet was designed in the belief that it would be used by experts, essentially for scientific and military applications. Analysing current trends and the parties now involved, this is no longer the case:
- The original assumption of trusted end systems has completely vanished.
- The Internet is now used for real-time streaming, which was not considered in the original design goals.
- ISPs now provide the services, and third parties are involved.
- Users are far less sophisticated.
The Internet is now used by all age groups, which means its beneficiaries are no longer only the scientific community, and this invites public interest in content. Governments and managements want control over the messages delivered. There are scenarios where users want proof of a transaction, and others where they want to remain anonymous. There are also questions about how far one can trust software or hardware: applications can monitor user activity, and hardware identifiers can track a user from any part of the world without the user's knowledge. The use of the Internet for spreading unwanted messages (SPAM) or for denial-of-service attacks forces end systems to become more and more sophisticated, that is, more complicated day by day.
The possible ways to respond are:
- Modify the end node so that it has settings to control applications, for example keeping adult content away from children, or tracking user transactions under law by modifying the browser design. But monitoring of content by governments takes away the end-to-end assumption.
- Add functionality to the core using firewalls, traffic filters and NAT elements. Firewalls are widely used to protect an island of nodes from the rest of the network, with filtering normally done at the network layer; there are also application-layer filters such as application proxies. The design of NAT boxes removed the fear of running out of public IPs, but it also removed the original design principle that addressing is unchanged during end-to-end transmission (see the sketch after this list).
- Modify the operation of ISPs to control the content that passes through them. But with encryption mechanisms, the third-party interests are not preserved.
- Label the content, for example marking advertisement messages "Adv", or adding metadata to web sites.
- Keep the end-to-end design but use anonymisers, content filters and content caches for better performance and less overhead for the end user.
- Use trusted third-party services for PKI and content analysis.
- Use non-technical solutions, such as enacting new cyber laws or amending the existing ones.
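
To make the NAT point concrete, here is a small Python sketch of my own (the addresses and port numbers are illustrative examples, not from the article) showing how a NAT box rewrites the source address and port, so the address the remote host sees is no longer the one the sending host actually used:

    from typing import Dict, Tuple

    PUBLIC_IP = "198.51.100.1"                    # the NAT box's single public address
    nat_table: Dict[int, Tuple[str, int]] = {}    # external port -> (private ip, private port)
    next_port = 40000

    def translate_outbound(private_ip: str, private_port: int) -> Tuple[str, int]:
        """Rewrite a private (ip, port) pair to the NAT's public address."""
        global next_port
        external_port = next_port
        next_port += 1
        nat_table[external_port] = (private_ip, private_port)
        return PUBLIC_IP, external_port

    def translate_inbound(external_port: int) -> Tuple[str, int]:
        """Map a reply arriving at the public address back to the internal host."""
        return nat_table[external_port]

    # The remote server sees 198.51.100.1:40000, not the host's real 192.168.0.10:51515,
    # which is exactly the break with end-to-end addressing described above.
    print(translate_outbound("192.168.0.10", 51515))
    print(translate_inbound(40000))
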
The bad side of the Internet is that we get every kind of data in the cheapest possible way. From my experience as a user, the number of spam messages and porn advertisements increases day by day. There are reports of more and more cyber crime and of problems with the content delivered, especially content used to spread terrorism. We can never fully trust either the system or the software we are using: the license agreement says "use it as is", the vendors are not responsible for any losses caused by using it, and if you object, the matter falls under the court of XXX country and YYY place. Newspapers describe how Internet fraud has cost many people their property, or the new culture of social networking and private communication mechanisms. These are, in fact, the wrong sides of the Internet.
The good side is that we get every kind of data in the cheapest possible way. The Internet makes it possible to exploit information available all over the world. It lets users share and collaborate on any kind of work more efficiently than earlier methods. It has enabled revolutions in many areas, such as the Human Genome Project or the SETI@home distributed-computing model for solving unsolved problems of public interest. It is now used by everyone, despite its problems. This shows the success of the Internet, and of computerization in general.
The question is: what do we lack, and who will improve the situation? Should private billion-dollar companies, the military or governments decide the future of the Internet and the ways in which it should be used? To answer that, let us compare it with existing technologies and see how they addressed similar issues.
With the telephone or the post, anybody could send objectionable content in the past, and both also allowed anonymity. People have used encryption in communication for a very long time. Magazines and TV channels deliver all kinds of material, including objectionable content, and they have hardly ever shown viewers a warning that it is meant for adults.
The way these media were controlled is collective, by the parties involved. An individual who uses TV or magazines is aware of the possible content and prevents it from reaching a child. Organizations, both for-profit and non-profit, work together to control the material delivered through the media. The postal and telephone companies have monitoring facilities that can be used by law-enforcement agencies.
What do these systems have that the Internet does not? It is collective control. The Internet currently runs on software and hardware that is not auditable. Its design is controlled not by non-profit or law-making organizations acting in the public interest; the standards are made by corporations with their own vision of making more profit. ISPs decide to charge based on the service rather than the size of the data, which is as absurd as someone charging you 20 bucks for a sheet of paper if you use it for essay writing and 2 bucks for the same sheet if you use it to cover a book. Instead, ISPs should compete on quality of service: 24-hour uptime, minimum congestion, preservation of the offered bandwidth, and so on. Just as the first and last post offices stamp a letter, and those stamps can be trusted, we need stamps made by the ISPs so that any data delivered is traceable with trust.
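
As a rough illustration of the stamping idea (my own sketch; the ISP names, key and message are made up, and a real scheme would need proper key management, for example through a PKI), each ISP on the path could append a timestamped stamp protected by a keyed hash so that the route of a message can later be audited:

    import hashlib
    import hmac
    import time

    ISP_KEY = b"example-isp-signing-key"   # hypothetical key held by the stamping ISP

    def stamp(message: bytes, isp_name: str) -> bytes:
        """Append an ISP stamp: name, timestamp, and a MAC binding them to the message so far."""
        header = f"stamped-by={isp_name};time={int(time.time())}".encode()
        mac = hmac.new(ISP_KEY, message + header, hashlib.sha256).hexdigest().encode()
        return message + b"\n" + header + b";mac=" + mac

    msg = b"original message from sender"
    msg = stamp(msg, "first-hop-isp")   # like the stamp of the originating post office
    msg = stamp(msg, "last-hop-isp")    # like the stamp of the delivering post office
    print(msg.decode())
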
The other thing to consider is: who can be the trusted parties? And do we need trusted parties per country or jurisdiction, or can we have a consortium?
A short set of suggestions for a better Internet:
- Trustable stamping by ISPs so that communication can be traced.
- An open architecture of systems and networks, auditable by experts from public-interest and non-profit organizations.
- Cyber laws at the national and international level, made or amended cooperatively by all parties, to control objectionable material.
- A person holding the world's most accurate gun is not secure if he does not know what he holds; so create awareness among users about effective use of the Internet rather than trying to build so-called "more secure" machines.
- Quality-based charging, with cost-effective service design according to the demands of society.
- Monitoring facilities at the ISP level to track terrorism and threats to the public, and filtering facilities if needed, subject to the courts of the jurisdiction in which the service is offered (a small sketch follows this list).
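
A toy sketch of my own of the ISP-level filtering suggested above (the blocklist and domain names are purely hypothetical): traffic to court-ordered destinations is refused and everything else passes:

    BLOCKED_DOMAINS = {"blocked.example.org"}   # hypothetical court-ordered blocklist

    def may_forward(destination_domain: str) -> bool:
        """Return True if the ISP should forward traffic to this destination."""
        return destination_domain not in BLOCKED_DOMAINS

    print(may_forward("news.example.com"))      # True: not on the blocklist
    print(may_forward("blocked.example.org"))   # False: filtered under court order
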
Good morning/afternoon/evening dear reader
For many years we have seen the growth of the Internet and its good and bad uses. This post is not a comment on that, but on the paper "The Design Philosophy of the DARPA Internet Protocols" by David D. Clark, SIGCOMM '88.
I wrote this as part of my course work, but felt it was worth making available to others as well. Feel free to comment on my review so that I can improve it.

Review on "The Design Philosophy of the DARPA Internet Protocols"

DARPA started the ARPANET project in 1966 for reliable communication between two geographically separated "hosts". They chose packet switching, which uses store-and-forward for reliability, over dedicated circuit switching. In the 1970s the aim was to use services remotely through the TELNET protocol, which ran above a three-layer ARPANET protocol stack (HOST/IMP, HOST/HOST and ICP). The architecture proposed in 1974 by Robert Kahn and Vinton G. Cerf used the term TCP for a protocol that masks any heterogeneity in the underlying network architecture. This TCP had to handle fragmentation, reassembly, reliable delivery with retransmission, routing and multiplexing. The notion of a gateway was used to transmit data between heterogeneous networks, and a unique addressing scheme was proposed for the end host nodes. It did not yet contain the notion of IP, the Internet Protocol.

The paper by Clark describes the separation of IP, the Internet Protocol, which provides unreliable packet transmission over whatever route is available. End-to-end addressing, fragmentation and reassembly are handled by IP, while the virtual end-to-end connection is retained in TCP. The reason for the split was that TCP handles reliable sequencing of streams, while IP provides the basic building block, the packet, on which the various services offered by higher layers can be built.
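
To see the two services side by side, here is a short loopback sketch of my own (the port numbers are arbitrary): SOCK_DGRAM exposes roughly the unreliable datagram delivery that IP provides, while SOCK_STREAM adds TCP's reliable, ordered byte stream on top of it:

    import socket
    import threading
    import time

    HOST, TCP_PORT, UDP_PORT = "127.0.0.1", 50007, 50008   # arbitrary example ports

    def tcp_echo_server():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, TCP_PORT))
            srv.listen(1)
            conn, _ = srv.accept()
            with conn:
                conn.sendall(conn.recv(1024))     # reliable, ordered delivery

    def udp_echo_server():
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as srv:
            srv.bind((HOST, UDP_PORT))
            data, addr = srv.recvfrom(1024)
            srv.sendto(data, addr)                # each datagram stands alone, best effort

    threading.Thread(target=tcp_echo_server, daemon=True).start()
    threading.Thread(target=udp_echo_server, daemon=True).start()
    time.sleep(0.2)                               # give the servers time to bind

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as c:
        c.connect((HOST, TCP_PORT))
        c.sendall(b"reliable stream")
        print("TCP echo:", c.recv(1024))

    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as c:
        c.sendto(b"best-effort datagram", (HOST, UDP_PORT))
        print("UDP echo:", c.recvfrom(1024)[0])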

One accepted limitation was that TCP/IP is not suitable for real-time data such as voice traffic, since the network does not guarantee a maximum end-to-end delay. The cross-network debugger was also removed from TCP/IP, since it is not feasible over unreliable IP across heterogeneous networks; debugging thus becomes the application designer's job.

The idea of fate sharing explains where connection state should live in a "best effort" network: IP and the intermediate gateways should not hold any information about the connections made by the upper layers, so that state is lost only when an end host itself fails. Thus fragmented packets are reassembled at the end host and only then delivered to the application.
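
As a rough Python sketch of my own (the fragment format is invented for illustration), the reassembly state below lives only at the receiving host; an intermediate gateway never holds this table, so its failure cannot destroy the connection:

    from typing import Dict, Optional

    # buffers[datagram_id] maps fragment offset -> payload bytes, kept only at the end host
    buffers: Dict[int, Dict[int, bytes]] = {}

    def receive_fragment(datagram_id: int, offset: int, payload: bytes,
                         total_length: int) -> Optional[bytes]:
        """Store one fragment; return the full payload once reassembly is complete."""
        frags = buffers.setdefault(datagram_id, {})
        frags[offset] = payload
        if sum(len(p) for p in frags.values()) == total_length:
            data = b"".join(p for _, p in sorted(frags.items()))
            del buffers[datagram_id]   # endpoint state disappears with the datagram
            return data
        return None

    # Fragments arriving out of order are still reassembled correctly at the end host.
    assert receive_fragment(7, 5, b"world", 10) is None
    assert receive_fragment(7, 0, b"hello", 10) == b"helloworld"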

The TCP/IP protocol architecture evolved from practical issues faced during its development on the ARPANET and later NSFNET, and so it avoided the overhead of the seven-layer architecture. Compared with the seven-layer OSI model, the visible difference is that session and presentation logic do not exist as a full stack of protocols; instead, session management and presentation logic are part of the application layer. This optimized design proved successful against the competing technologies of the age, such as IBM's SNA and X.25.

However, the design rested on a set of assumptions that ignored end-to-end security, which is surprising for a framework built for military applications. There was no thought of implementing security at the application level either, since the applications designed at the time, Telnet, SMTP, POP and FTP, never had any security mechanisms. Anyone can wiretap the information, do damage and still hide, since route information is never preserved. TCP uses initial sequence numbers that an attacker can predict and exploit to forge connections, and IP source routing was another flaw that lets an attacker re-route traffic to his own machine. The routing protocols do not check the authenticity of the route information they receive, and the management services in TCP/IP were so poorly designed that managing huge networks became practically difficult.
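
Here is a hedged sketch of my own of the sequence-number weakness (the constants are only illustrative, not the exact historical generator): an initial sequence number driven by the clock can be guessed by an off-path attacker, whereas a randomized one, in the spirit of later fixes, cannot:

    import os
    import time

    def clock_based_isn() -> int:
        # Roughly how early stacks behaved: a counter driven by the clock, easy to predict.
        return int(time.time() * 250000) % 2**32

    def randomized_isn() -> int:
        # Later practice: unpredictable randomness, so a spoofer cannot guess the ISN.
        return int.from_bytes(os.urandom(4), "big")

    print("clock-based ISN:", clock_based_isn())
    print("randomized ISN:", randomized_isn())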

If TCP/IP had incorporated a kernel-level security module based on PKI (Public Key Infrastructure) that applications could call on when they needed security, it might have been much better. Public-key cryptography was proposed in 1976 and the RSA algorithm in 1977, while this paper and TCP/IP development happened in 1982-87, so there were enough methodologies available to at least propose a security layer.
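
As a sketch of the kind of sign-and-verify service such a module could have exposed to applications (my own illustration, assuming the third-party Python "cryptography" package; nothing like this was defined by TCP/IP):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    message = b"MAIL FROM:<alice@example.org>"   # e.g. an SMTP command worth authenticating
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

    signature = private_key.sign(message, pss, hashes.SHA256())       # sender signs

    try:
        public_key.verify(signature, message, pss, hashes.SHA256())   # receiver verifies
        print("signature valid: message is authentic")
    except InvalidSignature:
        print("signature invalid: message was tampered with or forged")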

The architecture could also have standardized the service offered: for example, an ISP or gateway should check that the source IP of a forwarded packet belongs to its own network, which would have prevented many threats, including DoS and DDoS. Instead, in the race for faster growth, TCP/IP did not do this. The design goals did not consider who the future users might be, even though faster growth of this kind of communication technology was expected. Thus, even now, SMTP and FTP servers run with plain-text mechanisms.
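
A minimal sketch of that ingress/egress check (my own, using made-up addresses from the documentation ranges): the gateway forwards a packet only if its source address falls inside the network's own prefix, which blocks the address spoofing behind many DoS attacks:

    import ipaddress

    CUSTOMER_PREFIX = ipaddress.ip_network("203.0.113.0/24")   # the ISP's own address block

    def should_forward(src_addr: str) -> bool:
        """Forward only packets that genuinely originate from our own network."""
        return ipaddress.ip_address(src_addr) in CUSTOMER_PREFIX

    print(should_forward("203.0.113.42"))   # True: legitimate customer source address
    print(should_forward("198.51.100.7"))   # False: spoofed source, should be dropped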

--------------------------------------


Other References:

  1. V. Cerf and R. Kahn, "A Protocol for Packet Network Intercommunication", IEEE, 1974.

  2. S. M. Bellovin, "Security Problems in the TCP/IP Protocol Suite", ACM, 1989.

  3. Davidson et al., "The ARPANET TELNET Protocol", IEEE, 1974.