Speeds will forever increase

One Gigabit = 1000 Megabits = 1,000,000 Kilobits = 1 Billion Bits

Speed isn’t everything, but it is the main thing. Internet and other data speeds will increase. Networks will have to improve to keep up. It is hard to imagine today, but the maximum Internet access speed in 1980 was 2400 bits per second (bps) over a dial-up modem. At that rate an average HD movie file (around 16 billion bits) would take about two and a half months to download. By 1998 dial-up modems had grown to 56 kilobits per second (kbps), at which rate an HD movie would arrive more quickly, in about three days. Broadband (speeds above dial-up modems) became common in the early part of this century, rising so fast that the FCC defined broadband as 10 million bits per second (mbps) downstream (to you) in 2010, a speed enabling that same movie to show up in 27 minutes. The FCC ratcheted broadband up to 25 mbps in 2015, but average nominal Internet speeds today run around 90 mbps, a rate enabling that movie download in 3 minutes. Not bad. But for reasons we suggest below, we are headed for a gigabit world, homes connected to services with one billion bits per second (gbps) nominal speeds, at which rate that movie will take 16 seconds, enough time to turn on the TV or get a glass of wine or water. Such is the brave new world of broadband.
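
The arithmetic behind those download times is simple division; here is a minimal sketch in Python using the same 16-billion-bit movie and the speeds just mentioned:

    # Download times for the 16-billion-bit HD movie used above, at various speeds.
    MOVIE_BITS = 16e9

    def pretty(seconds):
        # Pick the largest convenient unit for readability.
        for unit, size in (("days", 86400), ("hours", 3600), ("minutes", 60)):
            if seconds >= size:
                return f"{seconds / size:.1f} {unit}"
        return f"{seconds:.0f} seconds"

    for label, bps in [("2400 bps dial-up", 2400), ("56 kbps dial-up", 56e3),
                       ("10 mbps broadband", 10e6), ("90 mbps today", 90e6),
                       ("1 gbps fiber", 1e9)]:
        print(f"{label}: {pretty(MOVIE_BITS / bps)}")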

Three forces pushing speeds up are familiar from the past. One, video quality keeps getting better; each improvement demands higher data rates. Two, we keep finding more videos and other things to store, ballooning file sizes; many will beg for upstream data transfer in minutes rather than days. Three, working at home will grow; with that growth will come critical demands on upstream bandwidth, the Achilles heel of copper-based networks. But two of the pressures are new. The Internet of Things will explode the number of devices in everyone’s home demanding access to a single router; that router will have to be connected to an ever-increasing data connection. And applications are coming that demand a much shorter delay between sending a request and getting a response from a server, what is called latency. Lower latency not only requires higher data rates, it requires much less sharing of lines. It also demands servers closer to users, and in some cases dedicated servers. Future forms of Virtual Reality, holographic video games and video conferencing, telemedicine machines in our homes to keep us alive, and systems our kids will use to learn will all demand it.

Speeds Required for Various Streaming Video Formats

(Warning: digital television and digital video have given birth to a bewildering range of parameters that enter into user experience: frame rate, refresh rate, compression, interlaced versus progressive scanning, color density, motion interpolation, backlight scanning. They arise from the historical circumstance that the conventional movie frame rate of 24 per second, and analog televisions that refreshed half of the screen 60 times a second (called interlaced scanning), do not provide the best user experience for video games, high-resolution movies, or other videos. Digital video expands the options significantly, with frame rates up to 60 (or more in some cases) and refresh rates up to 120 (the higher ones claimed by TV manufacturers do not mean much), with digital compression required to transmit video over today’s networks, and lots of things designed to make up for the distortions such compression introduces. The figures given here are generally high, assuming 60 Hz frame rates and progressive scanning for transmitted material. Many streams from Netflix, Hulu, and others will be considerably lower, with a corresponding reduction in picture quality.)

When television screens went digital, they became subject to technology improvements. Better picture quality came from increasing the number of pixels on a screen. The first digital screens supported Standard Definition television, with 640 pixels across the screen and 480 pixels down. Painting every other pixel with 16-bit color 30 times a second (the minimum with interlaced scanning) would require a data rate of around 75 mbps (480 x 640 x 16 x 30, divided by 2), a figure possible now but unthinkable in the late 1980s when digital television was making its debut. So engineers figured out ways of compressing the digital signal, with some loss of fidelity, to around 1.5 mbps. You have no doubt seen the effects in screen tiling and distortions when people move too fast. The actual rate for any given video varies a small amount depending upon things like how fast the images change and how fast the colors change.
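
As a quick check of that figure, the same calculation in Python looks like this:

    # Raw data rate for interlaced Standard Definition video, using the figures above.
    raw_bps = 480 * 640 * 16 * 30 / 2    # pixels x bits of color x fields per second, half the pixels per field
    print(raw_bps / 1e6)                 # about 74 mbps, the "around 75 mbps" cited
    print(raw_bps / 1.5e6)               # roughly 49:1 compression needed to reach 1.5 mbps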

But that was a small television with interlaced scanning. Television screens got bigger, and interlaced scanning gave way to progressive scanning, where every pixel is painted on every scan, at a rate of 60 times a second rather than 30. Compression got better as well (otherwise we would not have the better televisions). The next generation was High Definition TV (HD), with a screen measuring 1920 pixels across and 1080 pixels down. Bits per pixel also grew from 16 to 24 or 32. In raw, uncompressed form such a screen could require a data rate of almost 3 billion bits per second (gbps). Compression miraculously reduced the data rate now in use to around 7 mbps, with some infirmities in quality but a data rate sustainable over cable television networks.

Some screens got smaller but denser. The iPhone X has a pixel space that measures 1125 x 2436. Furthermore, colors have wandered up to 32 bits for some video experiences. Painting every pixel on an iPhone X at 32 bits, 60 times a second, requires 5.26 gbps. Radical compression can get that under 10 mbps, with loss of fidelity. With video now 75% of Internet traffic, mobile networks are under real pressure to upgrade antenna capacity. Television screens are also not standing still. The next size, designated Ultra High Definition Television (UHD), provides 3840 pixels across and 2160 pixels down. The horizontal pixel count has given the name 4K to these screens. The screen dazzles for sports, fast-paced video games, nature shots, and anything else improved by greater pixel density. Data rates after compression to paint these screens can rise to 30 mbps. Virtually any TV over 50 inches sold today is 4K. We are not through. Television companies now have demonstration models with pixel spaces of 7680 x 4320, called 8K in the market vernacular. These screens will require from 60 to 90 mbps per video experience.
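
The same pixels-times-bits-times-frames arithmetic, applied to the screens above, shows how hard compression has to work. This is a sketch only; the raw figures are exact, while the compressed figures are the approximate delivered rates mentioned in the text:

    # Raw (uncompressed) rates for progressive-scan video at 60 frames per second,
    # versus the rough delivered (compressed) rates discussed above.
    screens = [
        ("HD, 1920 x 1080, 24-bit color",       1920 * 1080, 24, 7e6),
        ("iPhone X, 1125 x 2436, 32-bit color", 1125 * 2436, 32, 10e6),
        ("4K UHD, 3840 x 2160, 24-bit color",   3840 * 2160, 24, 30e6),
        ("8K, 7680 x 4320, 24-bit color",       7680 * 4320, 24, 90e6),
    ]

    for name, pixels, bits, delivered_bps in screens:
        raw_bps = pixels * bits * 60
        print(f"{name}: raw {raw_bps / 1e9:.2f} gbps, "
              f"roughly {raw_bps / delivered_bps:.0f}:1 compression to reach {delivered_bps / 1e6:.0f} mbps")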

Cable television networks are now stuck. CATV networks devote a fixed amount of bandwidth to television channels. They use modems to digitize that bandwidth and send digital television signals down it. The total capacity is suitable for the present arrangement of Standard and HD television, but it is at its limit. So they have no capacity to send broadcast UHD (4K) material. They are similarly limited on video-on-demand, its bandwidth fighting with increased demand for Internet channels. According to reports, cable television companies plan to upgrade their networks to eliminate the bandwidth devoted to sending all 220 channels to everyone, creating what amounts to a full video-on-demand system, hoping to free up bandwidth for non-broadcast television services. But this is extremely expensive and, as far as we know, has not even begun anywhere (it changes many things in their systems).

Uploading Files

(Sending files from a server to a user is “downloading.” Sending files from a user to a server is “uploading.” Most applications to date have required much more downloading than uploading. Streaming video is almost all download, the occasional command requiring very little. Building a web page follows a call-and-response protocol, but the calls are small relative to the responses. Most files today are downloaded, not uploaded. So networks today are highly “asymmetrical,” with much higher downstream rates than upstream rates. The FCC defines “broadband” today as 25 mbps down and 3 mbps up. CATV networks are migrating downstream rates to 1 gigabit in select areas (peak, shared among as many as 100 users), but holding upstream rates around 10 mbps. Over time this will be a major handicap. Fiber optic networks, with almost indefinite capacity, are typically deployed with symmetric data rates, roughly the same speed in both directions, and with much less sharing or no sharing.)

File sizes have exploded. With image and video storage eating up gigabytes at ever increasing rates, laptops are sold now with up to 4 terabytes of storage; that is 4000 gigabytes or four trillion bytes of storage (a byte is 8 bits). Uploading 4 terabytes to Cloud storage would take at least 123 days at today’s FCC-defined broadband upload speed of 3 mbps. At a fiber speed of 1 gigabit per second, the time comes down to about nine hours, still a long interval, but not out of the question.

The push has come as much from video and image files as anything else. Large video files have become commonplace. Uploading a file with 50 HD videos (about 250 gigabytes) at today’s 3 mbps would take about a week. Should upload speeds migrate to 50 mbps, it would still take 11 hours. But at 1 gbps, the entire file moves to the Cloud in 33 minutes.
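
Those figures come straight from file size divided by upstream speed; a short sketch with the same numbers:

    # Upload times for the examples above: a 4-terabyte laptop backup and a
    # 250-gigabyte folder of 50 HD videos, at three upstream speeds.
    uploads = [("4 TB backup", 4e12), ("50 HD videos, 250 GB", 250e9)]   # sizes in bytes
    upstream = [("3 mbps", 3e6), ("50 mbps", 50e6), ("1 gbps", 1e9)]     # speeds in bits per second

    for name, size_bytes in uploads:
        for label, bps in upstream:
            hours = size_bytes * 8 / bps / 3600
            print(f"{name} at {label}: {hours:,.1f} hours ({hours / 24:.1f} days)")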

Slow upload speeds actually deter many useful and important applications. If everyone had a gigabit upload rate instead of the snail-like 3 to 6 mbps rates of today, many more people working in information-intensive businesses could work at home. High-quality video conferencing would become commonplace, making collaborative processes more effective and plentiful. Instant cloud back-up would ensure that no work effort or desirable information would ever be lost. Telemedicine machines that capture huge amounts of information would have response times that on many occasions would save lives. Very high-performance multi-player video games could be played over the Internet. Videos could be shared with the same quality and frequency as we share pictures today. The Internet of Things, an upload glutton, will actually work.

Internet of Things (IoT)

Lawnmowers can now tell you when they need oil or sharpening, and refrigerators can tell you when you are out of milk and will email you if your 8-year-old accidentally leaves the door open. Myriad devices enable control of doors, windows, blinds, security cameras, lighting, and heating. In a few years it will be hard to buy anything for a home, even a new pot or stereo speaker, that does not have a sensing device, a battery, a microcomputer, and a communications facility that talks to its manufacturer about how it is doing or what it is doing, or talks to your computer or iPhone for status or control. All of these devices will need access to your home WiFi for connection to your cell phone or computer app and to external resources. Most of them will require low data rates, but when the number of such devices sails past 100, and it will, or even reaches 500 as some ambitious market reports suggest, the cumulative data rate starts to look fearsome.
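
How quickly the total adds up is easy to illustrate. The inventory below is an assumption for illustration, not a measurement; the point is that a house full of mostly low-rate devices plus a few cameras already wants tens of megabits:

    # A hypothetical home inventory: (description, device count, average bits per second each).
    devices = [
        ("sensors, plugs, and small appliances", 150, 5e3),
        ("smart speakers and displays",            6, 250e3),
        ("HD security cameras",                    4, 4e6),
    ]

    total_bps = sum(count * rate for _, count, rate in devices)
    print(f"aggregate demand: {total_bps / 1e6:.1f} mbps")   # about 18 mbps in this sketch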

The home of the future (not too far away) will likely have IoT devices connected together in a wireless mesh network, where each device links to two or more other devices. Such a network can relay packets from one place to another until they reach a desired end point: either a thing, a local computer, or a router connected to the Internet. Mesh networks beat the problem of signals not going through walls, as will often happen in a more conventional star configuration with all devices talking to one router. The communication protocols for very low-power, short-distance transmission already exist (IEEE 802.15.4 or 802.11ah). Networking protocols such as ZigBee also exist for low-power, low-data-rate, short-distance local networks. Battery and sensor technologies have already reached the point where a container smaller than a pea includes a sensor, a battery, a low-power microcomputer, and a low-power transceiver. They will be in all manner of devices, in wearables such as watches and medical attachments, and in our bodies. There will be charges of Big Brother and invasion of privacy, but this will happen in some way we deem acceptable, and sooner rather than later.
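
A minimal sketch of the relay idea follows. The device names and radio links are hypothetical, and real mesh protocols such as ZigBee handle routing, retries, and radio details, but the hop-by-hop path finding works the same way:

    # Find a relay path through a hypothetical home mesh with a breadth-first search.
    from collections import deque

    links = {
        "thermostat":   ["hall sensor"],
        "hall sensor":  ["thermostat", "kitchen plug"],
        "kitchen plug": ["hall sensor", "router"],
        "router":       ["kitchen plug"],
    }

    def relay_path(start, goal):
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for neighbor in links[path[-1]]:
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append(path + [neighbor])

    print(relay_path("thermostat", "router"))
    # ['thermostat', 'hall sensor', 'kitchen plug', 'router']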

Some of these devices will require immediate attention at unpredictable times. Medical alert systems, burglar alarms, and fire alarms lead the list, but future things will be able to give advance warning before something bad happens. Because the dominant direction of data flow for the Internet of Things is upstream, from the thing that collects information to whatever receives it, its pressure on data rates will be in the wrong direction for today’s highly asymmetric channels. To avoid traffic jams in the house and to provide high-probability access for emergency signals, upstream data rates are going to have to grow dramatically. Fiber to the Home will be the answer.

Latency

Latency refers to the time it takes a packet to transit a network link twice, from you to the server and then back from the server to you. It is also called delay, Round Trip Time (RTT), or ping. Latency is affected by link speed, switch transit speed, switch congestion, server processing time, packet size, the distance the packet must traverse, and the medium through which it travels. Each adds to the time, and which dominates depends greatly upon conditions. Network latency is expressed in milliseconds (ms), one thousandth of a second. Latencies in today’s U.S. networks range from 15 to 75 ms; they may vary on any given line by tens of milliseconds, the variation caused by traffic at switches and by congestion on highly shared links such as CATV networks.
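
Measuring it yourself is straightforward. The sketch below times a TCP connection set-up, which takes roughly one round trip; the familiar ping utility does the same job with ICMP packets, and example.com is only a placeholder host:

    # Rough round-trip-time estimate: time how long a TCP handshake takes.
    import socket
    import time

    def rough_rtt_ms(host, port=443):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        return (time.perf_counter() - start) * 1000

    print(f"{rough_rtt_ms('example.com'):.1f} ms")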

Light and electromagnetic signals only travel at the “speed of light” (186,282 miles per second) in a vacuum. Light traveling down a fiber optic line is slowed by the refractive index of the glass and by bouncing along the walls of the fiber, to around 70% of light speed (special hollow-core fibers with air in the middle get well above 90%, but they are expensive and restricted to specialized links today). Electromagnetic signals flowing down a copper line can be as slow as 60% of light speed. Except for signals going to a satellite, all terrestrial signals also encounter amplifiers (repeaters) and switches in the path. The measured time through the transatlantic fiber optic cables from the United States to Britain is about 60 milliseconds, which cannot be improved upon (the cables include repeaters). Transcontinental latencies are similar, around 63 ms from San Francisco to New York, say. Of this figure about 37 ms comes from the length of the fiber optic wiring; the rest comes from switches and repeaters.
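
The distance portion of those figures is simple physics. In the sketch below the route mileages are assumptions (fiber rarely runs in a straight line), but they reproduce the numbers above:

    # Round-trip propagation delay over fiber running at about 70% of light speed.
    LIGHT_MILES_PER_SECOND = 186_282
    FIBER_FRACTION = 0.70

    def fiber_rtt_ms(route_miles):
        return route_miles / (LIGHT_MILES_PER_SECOND * FIBER_FRACTION) * 2 * 1000

    print(f"San Francisco-New York, ~2,400 route miles: {fiber_rtt_ms(2400):.0f} ms")  # ~37 ms
    print(f"US-Britain, ~3,500 route miles: {fiber_rtt_ms(3500):.0f} ms")              # ~54 ms before repeaters and switches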

Latency has always been a problem, but one hidden from common view. A web page is built through a series of calls and responses: calls from the computer, responses from the server. There is a premium on getting the first byte there fast so the page build can start, and an equal premium on making the whole page appear as quickly as possible. Latency extends the build time at every call and response, so web page designers do everything in their power to limit the number of calls during the page build. And while the Cloud in its primary application has centralized computing and storage, the “Clouds” offered by Google, Apple, and Amazon depend upon Content Delivery Networks (CDNs) that locate data centers at the edge of a global network, all over the earth. Those centers hold duplicate copies of all content of interest, precisely to get close to customer locations so that delay, or latency, is minimized.
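
The cost of those calls and responses compounds quickly. The round-trip counts below are illustrative rather than measured from any particular page, but they show why designers trim calls and why CDNs move servers closer:

    # A page needing N sequential round trips pays N times the latency,
    # however fast the link is once bytes start flowing.
    def sequential_delay_ms(round_trips, rtt_ms):
        return round_trips * rtt_ms

    for rtt_ms in (75, 30, 10):
        print(f"RTT {rtt_ms} ms, 20 sequential round trips: {sequential_delay_ms(20, rtt_ms)} ms")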

While IP packets can be as large as 65,535 bytes, the practical upper limit (and now the typical size of a large Internet packet) is the 1500-byte Maximum Transmission Unit of Ethernet, which carries IP traffic over home and business routers and local area networks. The full maximum of 65,535 bytes adds about 21 ms of delay at a 25 mbps transmission rate, but only 0.5 ms at 1 gigabit. At the common 1500-byte size, the delay added by a 25 mbps link is less than 0.5 ms per packet. What is harder to appreciate is the effect of the total number of packets required to send an information unit before a response can be generated. Without compression, a typical advertising image of 728 x 90 pixels with 16-bit color fills almost 90 packets; compressed at a best-chance 10:1 ratio, that is about 9 packets, which take roughly 4 ms to transfer at 25 mbps. At 1 gigabit, the transfer time drops to about 0.1 ms.
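
Those packet and transfer figures are easy to reproduce:

    # Serialization delay: how long bytes take to put on the wire at a given speed.
    def transfer_ms(nbytes, bps):
        return nbytes * 8 / bps * 1000

    print(transfer_ms(65_535, 25e6))   # maximum IP packet at 25 mbps: ~21 ms
    print(transfer_ms(65_535, 1e9))    # the same packet at 1 gigabit: ~0.5 ms
    print(transfer_ms(1_500, 25e6))    # one Ethernet-size packet at 25 mbps: ~0.5 ms

    # The 728 x 90 advertising image at 16-bit color, compressed roughly 10:1.
    image_bytes = 728 * 90 * 16 // 8 // 10        # about 13,000 bytes, roughly 9 packets
    print(transfer_ms(image_bytes, 25e6))         # ~4 ms at 25 mbps
    print(transfer_ms(image_bytes, 1e9))          # ~0.1 ms at 1 gigabit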

We offer these numbers because a variety of future applications will demand predictable latencies below 10 ms. This is the figure claimed by people working on remote Virtual Reality (to avoid nausea) and on telemedicine equipment designed for remote surgery or rapid file transfer during emergencies. Autonomous vehicles need latencies closer to 1 ms. Four things become apparent from the material above. One, the server must be close, within a hundred miles or so. Two, the data rates have to be in the gigabit range in both directions. Three, there cannot be many switches and repeaters in the way. Four, server processing has to be quick. These system pieces do not exist now, but the applications are clearly in the wings; they will force data rates and latencies to improve considerably. From the perspective of the last-mile network, fiber optics will be a key component in making these applications a reality.
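
Putting the pieces together as a budget makes those four requirements concrete. Every component value below is an assumption for illustration:

    # A rough latency budget against a 10 ms target (all component values assumed).
    LIGHT_MILES_PER_SECOND = 186_282
    budget_ms = {
        "propagation, 100-mile round trip over fiber": 200 / (LIGHT_MILES_PER_SECOND * 0.70) * 1000,
        "serialization, a few packets at 1 gbps":      0.1,
        "switching and queueing, a handful of hops":   2.0,
        "server processing":                           3.0,
    }

    for item, ms in budget_ms.items():
        print(f"{item}: {ms:.1f} ms")
    print(f"total: {sum(budget_ms.values()):.1f} ms against a 10 ms target")
    # Move the server 1,000 miles away and propagation alone is ~15 ms, already over budget.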

In Short

The transition will continue: from dial-up modems topping out at 2400 bps in 1980, to dial-up modems topping out at 56,000 bps in 1998, to DSL and cable modems running from 2 to 8 mbps in 2008, to cable modems and advanced forms of DSL running at 50 mbps in 2018. At some point in the not too distant future cable and telephone networks, relying as they do upon copper lines, will be unable to keep up. Fiber optics all the way to the home or business will be the answer, with nominal data rates of 1 gigabit per second in both directions as a kind of standard offering.

Northwest ConneCT