Friday, November 26, 2010

Why Did Broadband-ISDN Really Die?


There are positive reasons for the smooth uptake of IP, such as the easy availability of the TCP/IP stack as compared with competing proprietary data protocols, the relative simplicity of the basic Internet architecture, and the prior existence of enterprise multiprotocol routers that could be used directly as Internet routers.
Perhaps more important, though, were the problems with ATM. In the 1980s when ATM was being designed, the dominant usage mode was seen to be the multimedia successor to the phone call—human beings making videophone calls. As we saw above, interactive multimedia is the most challenging application for packet networks, requiring a complex infrastructure of signaling, terminal capability negotiation, and QoS-aware media transport. It was not until the mid-nineties that large ATM switches capable of supporting the required signaling and media adaptation came to market—too expensive and too late.

Even worse, the presumed videophone usage model for B-ISDN was highly connection-oriented, assuming relatively long holding times per call. So ATM was designed as a connection-oriented protocol with substantial call set-up and tear-down signaling required for every call to reserve resources and establish QoS across the network. This required per-call state to be held in each of the transit network switches. For comparison, millions of concurrent calls (sessions or flows) transit a modern Internet core router and that router knows nothing whatsoever about them.
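To make the contrast concrete, here is a minimal Python sketch (illustrative names, ports, and prefixes only) of the two forwarding models: a connection-oriented switch that must hold an entry for every call in flight, against a connectionless router that keeps only a routing table and forgets each packet as soon as it has been forwarded.

import ipaddress

# Connection-oriented model: set-up signalling installs per-call state in
# every transit switch, and tear-down signalling removes it again.
per_call_state = {}                                  # call_id -> (in_port, out_port)

def setup_call(call_id, in_port, out_port):
    per_call_state[call_id] = (in_port, out_port)    # held for the call's lifetime

def teardown_call(call_id):
    per_call_state.pop(call_id, None)

# Connectionless model: the router holds only a routing table; each packet is
# forwarded independently by longest-prefix match, with no per-flow memory.
routing_table = {
    ipaddress.ip_network("203.0.113.0/24"): "if0",   # example prefix
    ipaddress.ip_network("0.0.0.0/0"): "if1",        # default route
}

def forward(destination):
    addr = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)   # longest prefix wins
    return routing_table[best]

print(forward("203.0.113.7"))   # -> if0; no call state consulted or created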

It turned out that critical enabling technologies for the Internet, such as DNS, require brief, time-critical interactions for which a connection-oriented protocol is inappropriate. Even for connection-oriented applications such as file transfer, which use TCP to manage the connection, connection state is held only in the end systems, not in the network routers, which operate in a connectionless fashion. This has allowed the Internet to scale.
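A rough sketch of just how small such an interaction is: an entire DNS lookup is one UDP datagram out and one datagram back, with no connection set up anywhere (the resolver address and query name below are merely examples, and parsing of the response is omitted).

import socket
import struct

def dns_query_a(name, resolver="8.8.8.8"):
    # 12-byte DNS header: ID, flags (recursion desired), one question, no answers
    header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # Question section: length-prefixed labels, then QTYPE=A (1), QCLASS=IN (1)
    qname = b"".join(bytes([len(label)]) + label.encode() for label in name.split(".")) + b"\x00"
    question = qname + struct.pack("!HH", 1, 1)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    sock.sendto(header + question, (resolver, 53))   # no connection set-up at all
    reply, _ = sock.recvfrom(512)                    # single reply datagram
    sock.close()
    return reply

print(len(dns_query_a("www.example.com")), "bytes back in one round trip")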

So in summary, ATM had too narrow a model of how end-systems would network, and backed the wrong solution: a connection-oriented design that could not scale. Because ATM was designed against a very sophisticated set of anticipated requirements, it was very complex, which led to equipment delays, expense, and difficulty in getting it to work. The world moved in a direction the framers of B-ISDN had not anticipated, and their architecture was stranded, then discarded.

Monday, November 22, 2010

Architecture vs. Components

The Internet was put together by many people and organizations, loosely coupled through standard protocols developed by the IETF. Some of it works well; some Internet services are beta quality or worse. The world of the Internet is exploratory, incremental, and sometimes revolutionary, and it's an open environment where anyone can play and innovate. The libertarian ideology associated with the IETF supplies the theory for this phenomenon. The IETF saw (and sees) itself as producing enabling technologies, not closed solutions. Each enabling technology—security protocols, signaling protocols, new transport protocols—is intended to open the door for new kinds of applications. To date, this is exactly what has occurred.
The Internet model is disaggregated—the opposite of vertically integrated. Because the Internet is globally accessible and supports an ever-increasing set of protocols (equating to capabilities), anyone with a new service concept can write applications, distribute a free client (if a standard browser will not do), and attempt to secure a revenue stream. This creates a huge dilemma for carriers. In the Internet model, they are infrastructure providers, providing ubiquitous IP connectivity. In the classic tee-shirt slogan “IP over everything,” the carriers are meant to be the “everything.” But “everything” here is restricted to physical fiber and optical networking in the network core; copper, coax, and radio in the access network; plus an overlay of routing/forwarding and allied services such as DNS. When it comes to end-user services, whether ISP services such as e-mail and hosting; session services such as interactive multimedia, instant messaging, and file transfer; or e-business services such as Amazon, eBay, and e-banking, there is no special role allocated to carriers—the Internet model says anyone can play.
This thought is entirely alien to the carriers, who have long believed they were more in the services business than mere bit transporters. Carriers have always wanted to move “up the value chain,” whether they were offering network-hosted value-added services or integrated solutions to their enterprise customers. As the carriers came to terms with the success of the Internet and the collapse of Broadband ISDN, they attempted their own theorisation of the Internet: not in the spirit of the IETF's libertarian open model, but more akin to the vertically integrated, closed models they were used to. They proposed to integrate
§  Data and media transport
§  Interactive multimedia session management
§  Computer application support
into one architecture where everything could be prespecified and would be guaranteed to work. And so arrived the successor to Broadband ISDN, the Next-Generation Network (NGN).
The advantages of the NGN, as the carriers see it, include a well-integrated set of services that their customers will find easy to use, and a billing model that keeps their businesses alive. The disadvantage, as their critics see it, is the reappropriation of the Internet by carriers, followed by the fixing-in-concrete of a ten-year roadmap for the global Internet. The predictable consequence, they believe, will be the stifling of creativity and innovation, especially if the carriers use their NGN architecture anti-competitively, squashing third-party Service Providers, which is technically all too possible.
We should be clear here: anyone offering an Internet service has to develop a service architecture. In the IETF’s view of the world, it is precisely the role of Service Providers to pick and choose from the IETF’s set of protocol components and to innovate architecturally. There is absolutely no reason why the carriers shouldn’t do their architecture on a grand scale through the NGN project if they wish. Critics may believe it’s overcomplicated, non-scalable, and ridiculously slow-to-market. If they are right, Service Providers with lighter-weight and nimbler service architectures will win in the marketplace, and the all-embracing NGN initiative will fail. “Let the market decide” is the right slogan, but the market must first of all exist, which means that the Internet’s open architecture must be preserved and not be closed down. Many carriers have significant market power and might be tempted to use it in order to preserve what they take to be their NGN lifeline against effective competition, so this is an issue for both customers and regulators. Thankfully, there are reasons to be hopeful as well as fearful.

Thursday, November 18, 2010

Multimedia Sessions on the Internet



Layering telephone-type functions onto the existing Internet architecture is a challenge. Some of the basics are just not there. For example, the Web uses names asymmetrically. There are a huge number of Web sites out there that can be accessed by anonymous users with browsers. Type in the URL, or use a search engine. Click and go. But the Web site doesn’t normally try to find you, and you have no URL of your own. The Public Switched Telephone Network (PSTN), by contrast, names all its endpoints with telephone numbers. A telephone number is mapped to a device such as a mobile phone, or to a physical line for a fixed telephone. Various companies provide phone number directory services, and the phone itself provides a way to dial and to alert the called user by ringing. The basic Internet structure of routers and computer hosts provides little help in emulating this architecture. Somehow users need to register themselves with some kind of telephony directory on the Internet, and then there has to be a signaling mechanism that can look up the called party in that directory and place the call. The IETF (Internet Engineering Task Force) has been developing a suitable signaling protocol, SIP (Session Initiation Protocol), since around 1999, and many VoIP companies are using it.
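As a rough illustration (the addresses, tags, and identifiers below are invented), a SIP client solves the “find me” problem by sending a REGISTER request that binds a stable, public address-of-record to wherever the user’s device currently happens to be; the registrar then maintains exactly the kind of directory the basic Internet lacks.

# The shape of a SIP REGISTER request, shown here as a Python string for clarity.
register = (
    "REGISTER sip:example.com SIP/2.0\r\n"
    "Via: SIP/2.0/UDP 192.0.2.10:5060;branch=z9hG4bK776asdhds\r\n"
    "To: <sip:alice@example.com>\r\n"            # the name other people will call
    "From: <sip:alice@example.com>;tag=456248\r\n"
    "Call-ID: 843817637684230@192.0.2.10\r\n"
    "CSeq: 1 REGISTER\r\n"
    "Contact: <sip:alice@192.0.2.10:5060>\r\n"   # where the device is reachable right now
    "Expires: 3600\r\n"                          # the binding must be refreshed periodically
    "Content-Length: 0\r\n"
    "\r\n"
)

# The registrar's essential job is to maintain this mapping so that an incoming
# INVITE addressed to the stable name can be routed to the current device.
location_service = {"sip:alice@example.com": "sip:alice@192.0.2.10:5060"}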

The next problem is a phone equivalent. A PC can handle sophisticated audio and video, multi-way conferencing, and data sharing. A PC, however, cannot be easily carried in a small pocket. Lightweight and physically small portable IP hosts are likely to have only a subset of a PC’s multimedia capabilities and cannot know in advance the capabilities of the called party’s terminal—more problems for the signaling protocol. A further reason for the relative immaturity of interactive multimedia services is the lack of wide-coverage mobile networks and terminals that are optimized for IP and permit Internet access. The further diffusion of WiFi and WiMAX, and possibly lower charges on 3G cellular networks, will hopefully resolve this over the next few years.

Can the Internet, and IP networks in general, really be trusted to carry high-quality isochronous traffic (real-time interactive audio-video)? Whole books have been written on the topic (Crowcroft, Handley, and Wakeman 1999) and it remains contentious. My own view is as follows. In the access part of the network, where bandwidth is constrained and there are a relatively small number of flows, some of which may be high-bandwidth (e.g., movie downloads), some form of class-of-service prioritisation and call admission control will be necessary. In the network itself, traffic is already sufficiently aggregated that statistical effects normalise the traffic load, even at the carrier’s Provider Edge router. With proper traffic engineering, Quality of Service (QoS) is automatically assured and complex, expensive bandwidth management schemes are not required. As traffic continues to grow, this situation will get better, not worse, thanks to the law of large numbers. Many carriers, implementing architectures such as IMS (IP Multimedia Subsystem), take a different view today and are busy specifying and implementing complex per-session resource reservation schemes and bandwidth management functions, as they historically did in the PSTN. My belief is that by saddling themselves with needless cost and complexity that fails to scale, they will succeed only in securing for themselves a competitive disadvantage. This point applies regardless of whether, for commercial reasons, the carriers introduce and rigidly enforce service classes on their networks or not—the service classes will inherently be aggregated and will not require per-flow bandwidth management in the core.
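A toy calculation, assuming independent flows with exponentially distributed demand, illustrates why aggregation does the work: the relative variability of the total load falls roughly as one over the square root of the number of flows, which is the law-of-large-numbers effect the argument relies on.

import random
import statistics

def relative_variability(n_flows, samples=1000):
    # Sample the aggregate demand of n_flows independent flows many times and
    # report the ratio of its standard deviation to its mean.
    totals = [sum(random.expovariate(1.0) for _ in range(n_flows)) for _ in range(samples)]
    return statistics.stdev(totals) / statistics.mean(totals)

for n in (1, 10, 100, 400):
    print(n, round(relative_variability(n), 3))   # shrinks roughly as 1/sqrt(n)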

After establishing a high-quality multimedia session, the next issue of concern is how secure that call is likely to be. By default, phone calls have never been intrinsically secure, as the ease of wiretaps (legal interception) demonstrates. Most people’s lack of concern about this is based upon the physical security of the phone company’s equipment, and the difficulty of hacking into it from dumb or closed end-systems like phones. One of the most striking characteristics of the Internet is that it permits open access, in principle, from any host to any other host. This means that security has to be explicitly layered onto a service. Most people are familiar with secure browser access to Web sites (HTTPS), which uses a protocol embedded in the browser and the Web server (SSL, the Secure Sockets Layer) and happens entirely automatically from the user’s point of view. Deploying a symmetric security protocol (e.g., IPsec) between IP-phones for interactive multimedia has been more challenging, and arguably we are not quite there yet. IMS implements hop-by-hop encryption, partly to allow for lawful interception. Most VoIP today is not encrypted—again, Skype is a notable exception. As I observe, Skype looked for a while to be proof against third-party eavesdropping, but following the eBay acquisition, I would not bet on it now.
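A minimal sketch of what “explicitly layering security onto a service” looks like in practice: an ordinary TCP connection is wrapped in TLS (the successor to SSL) before any application data flows, which is essentially what every browser does for HTTPS.

import socket
import ssl

ctx = ssl.create_default_context()
with socket.create_connection(("www.example.com", 443)) as raw_sock:
    with ctx.wrap_socket(raw_sock, server_hostname="www.example.com") as tls_sock:
        # From here on, everything sent and received is encrypted in transit.
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(200))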


Sunday, November 14, 2010

The Internet as the Next-Generation Network

We already mentioned the many complex functions that need to be integrated to make a carrier network work. It’s like a highly-specialized car engine. So where was this function for the Internet? Who was doing it? In what is the central mystery of the Internet, no one was doing it. The basic Internet is unusable, because it does nothing but provide protocols to allow packetized bits to be transferred between hosts (i.e., computers). It is pure connectivity. However, pure global connectivity means that any connected computer application can be accessed by any other computer on the network. We have the beginnings of a global services platform.
Here are some of the things that were, and are, needed to bring global services into being, roughly in the order in which each problem came up and was solved.

1.  Connecting to a service
Hosts and gateways operate on IP addresses for routing purposes. It is problematic, however, to use IP addresses (and port numbers) as end-system service identifiers as well. Apart from the usability problem of having to deal with 64.233.160.4 as the name of a computer hosting a service, IP addresses can be reassigned to hosts on a regular basis via DHCP or NAT, so they lack stability. A way to map symbolic names, such as www.google.com, to IP addresses is required. This was achieved by the global distributed directory infrastructure of the Domain Name System (DNS), also dating back to 1983.
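In code, the mapping DNS provides is a one-liner: a symbolic name goes in and an IP address comes out (the address printed will vary over time, which is precisely why the symbolic name is the stable identifier).

import socket

print(socket.gethostbyname("www.google.com"))   # e.g. 64.233.160.4, as in the text above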

2.  Interacting with a service
Part of writing an application is to write the user interface. In the early years of computing, this was simply a command line interpreter into which the user typed cryptic codes if he or she could recall them. The introduction of graphical user interfaces in the late eighties made the user interface designer’s task considerably more complex, but the result was intuitive and user-friendly. The introduction of HTML and the first Internet browsers in the early nineties created a standard client easily used to access arbitrary applications via HTTP across the Internet.
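Under the hood, the “standard client” boils down to something like the following sketch: fetch a named resource over HTTP and hand the returned HTML to whatever renders or processes it.

from urllib.request import urlopen

# What a browser does for every page: resolve the name, open a TCP connection,
# issue an HTTP GET, and read back HTML to render.
with urlopen("http://www.example.com/") as response:
    html = response.read()
print(html[:80])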

3.  Connecting to the Internet
Research labs, businesses, and the military could connect to the Internet in the eighties. But there was little reason for most businesses or residences to connect until the Web brought content and a way to get at it. Initially, mass connection ran (inefficiently) over the existing telephone network, enabled by the widespread availability of cheap modems. We should not forget the catalysing effect of cheap PCs with dial-up clients and built-in modems at this time. More recently, DSL and cable modems have delivered a widely available, high-speed, data-centric access service.

4.  Finding new services
Once the Web got going, search engines were developed to index and rank Web sites. This was the point where AltaVista, Yahoo!, and later Google came to prominence.

5.  Paying for services
There is no billing infrastructure for the Internet, although there have been a number of attempts to support, for example, micro-payments. In the event, the existing credit card infrastructure was adapted by providers of services such as Amazon.com. More recently specialist Internet payment organizations such as PayPal have been widely used (96 million accounts at time of writing).

6.  Supporting application-application services
Computer applications also need to talk to other applications across the Internet. They do not use browsers. The framework of choice uses XML, and we saw detailed architectures from Microsoft, with .NET, and from the Java community, with Java EE and its companion editions, mostly since 2000.
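A rough sketch of the pattern (the endpoint URL and element names are invented for the example): one application serialises a request as XML and POSTs it to another application's endpoint, with no browser or human anywhere in the loop.

from urllib.request import Request, urlopen
import xml.etree.ElementTree as ET

# Build an XML document describing the request...
order = ET.Element("order")
ET.SubElement(order, "item", sku="1234", quantity="2")
payload = ET.tostring(order, encoding="utf-8")

# ...and POST it to the partner application's endpoint.
request = Request("http://partner.example.com/orders", data=payload,
                  headers={"Content-Type": "application/xml"})
# response = urlopen(request)   # the receiving application parses the XML itself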

7.  Interactive multimedia services
Interactive multimedia was the hardest issue for the Internet. The reason is that supporting interactive multimedia is a systems problem, and a number of issues have to be simultaneously resolved, as we discuss next. So while for Broadband ISDN voice/multimedia was the first problem, for the Internet it has been the last (or at least, the most recent) problem.

Wednesday, November 10, 2010

Why Broadband ISDN?

The carriers knew they had to packetize their networks, and that they needed a new architecture (B-ISDN) supported by a number of new protocols. Here are some of the functions carriers thought they wanted to support.
§  Set up a multimedia video-telephony call between two or more people (involves signaling).
§  Carry the call between two or more people (involves media transport).
§  Permit access to music, TV programs and varied computer applications.
§  Allow computers to communicate efficiently at high speeds.
Each of the above functions would be a chargeable service, leading to billing the customer. Carriers also needed to provision, operate, assure, manage, and monitor their networks as usual.
When carriers contemplate network transformation or modernisation, they like to huddle amongst themselves in the standards bodies to agree on their target services and design a standardized architecture. The latter consists of a number of components, implemented using switches and servers, plus standardized message formats (protocols) to provide the intercommunication. It’s easy to see why this ends up as a monolithic and closed activity—it all has to be built by the vendors, slotted together, and then made to work properly. Getting the architecture into service is usually a highly complex, multi-year activity, and the cost has to be covered by more years of revenue-generating service. Supporters of the model have pointed to the scalability and reliability of modern networks, the accountability that comes from centralized control, and the sheer functionality that can be put in place by organizations with access to large capital resources and internal expertise.
Critics point to the monopolistic tendencies of capital-intensive industries with increasing returns to scale, the resistance of carriers to innovation and the overall sluggishness and inflexibility of the sector. They note that circuit-switched networking began in the 1860s and that it had taken a further 130 years to automate dialling and digitise calls. By the time I was asking my question in Canada, B-ISDN had already been in gestation for around 15 years with no significant deployment.
The Internet, of course, also took its time to get started. TCP/IP came into service in 1983 and by the late eighties research groups were using e-mail and remote log-in. The fusion of hypermedia and the Internet gave us Web browsers and Web servers in 1993–94 and launched the explosion in general Internet usage. By 1996 there was already a debate within the vendor and carrier community: was the future going to be IP and was the B-ISDN vision dead? It took a further ten years for the industry to completely take on board the affirmative response.
The Internet always ran on carrier networks. More precisely, the basic model of the Internet comprised hosts (computers running an IP stack and owned by end users) and routers (sometimes called gateways) forwarding IP packets to their correct destinations. The routers could be operated by any organization (often maverick groups within carriers) and were interconnected using standard carrier leased lines. Almost all hosts connected to the routers by using dial-up modems at each end across switched telephone circuits. So from a carrier perspective, the Internet was simply people buying conventional transport and switched services—the specificity of the Internet was invisible. In truth, the Internet was beneath the radar of the B-ISDN project.

Friday, November 5, 2010

Automate If Possible

If you are a large company, automation is especially critical if you ever hope to keep accurate track of all of these approvals. Automation will also help you provide better customer service, because a workflow tool routes requests to approvers and implementers automatically, without human intervention (a minimal sketch of this routing follows the list below). This could save you hours or even days on the end-to-end processing of an access request. Also, by automating, you will have a centralized repository that can be used for a variety of reporting:
§  Users can investigate their requests to determine status, without bothering someone on your team to assist them.
§  You can run monthly reports of what was requested and approved or rejected for contribution to your audit repository. This is a big step in creating a self-service environment for the auditors because they can select sample users from the user reports you have posted, and then they can look up approvals from the workflow reports.
§  You will also be able to better track frequency of request types and durations for service delivery, which will help you improve your customer service.
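A minimal sketch, with invented role, approver, and function names, of what such a tool does for every request: look up the approvers required for the requested item, route the request to them, and record each step in one central repository from which the status and audit reports above can be drawn.

# Invented example: which approvals a given requestable role needs.
approval_matrix = {"SAP_AP_CLERK": ["line_manager", "application_owner"]}
audit_log = []                                     # the centralized repository

def submit_request(user, role):
    request = {"user": user, "role": role,
               "pending": list(approval_matrix[role]),
               "status": "awaiting approval"}
    audit_log.append(("submitted", user, role))    # every step is recorded
    return request

def approve(request, approver):
    request["pending"].remove(approver)
    audit_log.append(("approved", approver, request["role"]))
    if not request["pending"]:
        request["status"] = "ready to implement"   # routed on with no human dispatcher
    return request

req = submit_request("jsmith", "SAP_AP_CLERK")
for approver in ("line_manager", "application_owner"):
    approve(req, approver)
print(req["status"])    # -> ready to implement
print(audit_log)        # the trail the auditors can sample from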
Implementing automation of approvals and therefore of user access requests is no small matter. This will be a fairly large and involved process. You will need to decide between building your tool in-house or buying a product that is available in the marketplace. This decision will depend on the size of your company, the number and complexity of your requests, and the amount of time you have to implement the new tool before it becomes an audit finding or before your customers kill you. It will also depend on the internal resources that you have available to do development work and what the prospect is for adequate ongoing support of the tool. Realistically, unless your company's core competency is software development and you want to get into the workflow market, you will be better off evaluating the products on the marketplace and purchasing one that suits your needs.
When it comes to workflow for user management, there are three broad classes of products from which you can choose:

1.  Built-in functionality in an identity and access management suite. Most of the large vendors provide a workflow component with their user provisioning product. The advantage of going with their built-in product is that it may already be included in the cost of the provisioning tool, and you will not need to deal with integration. The disadvantage is that many of the workflow tools that ship with user provisioning products are fairly limited in scope. Users will be able to request access and possibly hardware or software with that workflow, but nothing else. If you want to provide users with a single tool from which they can request anything they need, including telephone or cellular equipment, facilities services, and even technical support or supplies, you will want to forgo the savings in integration in favor of a tool that will provide a better user experience.
2.  Technical workflow tool. A number of products on the market are designed to be used as generic workflow products. They will support a variety of different kinds of workflows, from IT service requests to business interactions. The advantage is that they are highly robust and can handle even the most complex workflows, often graphically. The disadvantages are that you must build all of your workflows from scratch, possibly being offered a few templates and some guidance to assist you, and you would have an additional component in your environment to be integrated with the rest of your identity management solution.
3.  Service catalog tool. A small number of products on the market are sold as service catalog tools. A service catalog is a listing of services that are typically provided by a particular business unit—in this case, IT. This line of tools, in addition to providing basic service catalogs in key IT areas out of the box, also tends to offer user-friendly Web interfaces and familiar shopping cart style applications. The advantages are that you may be able to build your services more quickly because you would not be starting from scratch, and you would provide a very friendly experience for your users, potentially eliminating or at least decreasing a variety of status inquiries and the possibility of mis-submitting requests. The disadvantages are that you still have the integration problem, and this line of tools has a somewhat more lightweight workflow capability. It may not be able to handle the most complex workflows in your environment.
Ultimately, what you choose will depend on how customer-focused you are, how far-reaching you want the workflow tool to be, and how much you have to spend. Any one of the three solutions described here will provide you with the control and reporting you need to meet your audit requirements, provided you appropriately configure your new tool and accurately account for your critical applications and their approval requirements. Thus, the decision hinges on your other priorities and strategic vision.
If the business is interested in implementing an enterprisewide workflow tool that can be used ubiquitously, go with the technical workflow tool. If you need a greater customer service focus, want to make things easy for your end users, and ubiquity is not a requirement, consider a service catalog product. If speed of implementation is a top priority, and you either have another way of providing a single user front end for your access request system and other IT requests or are not concerned with providing a single front end, select an identity and access management suite with a strong workflow component and use that directly.
Regardless of your decision, be sure to document your requirements and selection decisions and also create an architecture document of your new product that explains how the workflow functionality works, how it prevents requests from being implemented prior to being approved, and what security mechanisms are in place to protect the data store of approval information. All of this documentation should be posted to your audit repository so that the auditors have easy access to the information.