Optical networking is light-based communication over a transmission medium through which electromagnetic waves propagate (1). The popularity of optical networking today is the result of technological breakthroughs in three areas: transmitting light over long distances, multiplexing light signals to carry enormous amounts of data, and the Ethernet protocol, the dominant standard governing how networked devices communicate over a LAN.
In 1854, John Tyndall first showed that light could be conducted through a curved stream of water, proving that a light signal could be bent. That was just the beginning. In 1880, Alexander Graham Bell invented his "Photophone," which transmitted a voice signal on a beam of light. Although Bell showed that data communication using light was possible, he eventually abandoned his invention because he couldn't overcome problems with light interference and attenuation. Attenuation is the loss of transmission signal strength, measured in decibels (2). As light is transmitted, it bounces along a very thin strand of glass called an optical fiber. Optical transmission operates on a principle known as total internal reflection: the light must be contained on its path. One early frustration of optical networking was that, under real-world conditions, it was hard to keep light on a contained path. It wasn't until 1954 that Abraham Van Heel covered a bare fiber with a transparent cladding of a lower refractive index. This protected the fiber's reflective surface from outside distortion and greatly reduced interference between fibers. At the time, the greatest obstacle to viable fiber optics was achieving the lowest possible signal (light) loss (3). Cladding effectively solved that problem.
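The two quantities just mentioned, attenuation in decibels and total internal reflection, can be sketched with a little arithmetic. The refractive indices below are illustrative assumptions for a generic silica fiber, not figures from any of the sources cited here.

```python
import math

def attenuation_db(power_in_mw, power_out_mw):
    """Signal loss in decibels: 10 * log10(P_in / P_out)."""
    return 10 * math.log10(power_in_mw / power_out_mw)

def critical_angle_deg(n_core, n_cladding):
    """Smallest angle of incidence (measured from the normal) at which
    total internal reflection occurs, from Snell's law:
    sin(theta_c) = n_cladding / n_core."""
    return math.degrees(math.asin(n_cladding / n_core))

# A signal that drops from 1.0 mW to 0.5 mW has lost about 3 dB.
print(round(attenuation_db(1.0, 0.5), 2))   # 3.01

# Illustrative (assumed) indices: core ~1.48, cladding ~1.46.
# Light hitting the core/cladding boundary at a steeper angle than
# this stays trapped in the core -- which is why Van Heel's
# lower-index cladding mattered.
print(round(critical_angle_deg(1.48, 1.46), 1))
```

The second function shows why the cladding's refractive index must be *lower* than the core's: if it weren't, the arcsine argument would exceed 1 and no angle would give total internal reflection.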
In 1970, Robert Maurer, Donald Keck, and Peter Schultz of Corning Inc. designed the first optical fiber with optical losses low enough for wide use in telecommunications. This optical fiber was capable of
carrying 65,000 times more information than the copper wire that was widely used. A 1978 paper discussed a concept called wavelength-division multiplexing (WDM). While optical multiplexing was first proposed in a 1958 IEEE paper by R. T. Denton and T. S. Kinsel, "Terminals for a high-speed optical pulse code modulation communication system: II. Optical multiplexing and demultiplexing," the 1978 paper took that a step further. It described a technique that multiplexes a number of optical carrier signals onto a single optical fiber by using different wavelengths of laser light. This enables bidirectional communication over one strand of fiber, as well as a multiplication of capacity (4). To give an idea of the bandwidth increases wavelength-division multiplexing produces, consider the FASTER project, in which Google and five other companies joined together to lay a new trans-Pacific submarine cable (5). Using WDM technology, FASTER delivered an initial capacity of 60 terabits per second (Tbps) between the US and Asia, substantially more than the 4 Tbps typical of similar trans-Pacific cables (6).
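The capacity multiplication WDM provides is simple arithmetic: aggregate throughput is the number of fiber pairs times the wavelengths per fiber times the bit rate per wavelength. The decomposition below (6 fiber pairs, 100 wavelengths per fiber, 100 Gbps per wavelength) is an assumed set of design figures for FASTER, used only to show how a 60 Tbps headline number can break down.

```python
def wdm_capacity_tbps(fiber_pairs, wavelengths_per_fiber, gbps_per_wavelength):
    """Aggregate capacity of a WDM link, in terabits per second."""
    return fiber_pairs * wavelengths_per_fiber * gbps_per_wavelength / 1000

# Assumed FASTER-style design: 6 fiber pairs, 100 wavelengths per
# fiber, 100 Gbps per wavelength.
print(wdm_capacity_tbps(6, 100, 100))  # 60.0

# Without WDM (one wavelength per fiber), the same fibers would
# carry only a fraction of that.
print(wdm_capacity_tbps(6, 1, 100))    # 0.6
```

The contrast between the two calls is the whole point of WDM: the hundredfold gain comes entirely from packing more wavelengths onto each existing strand, not from laying more fiber.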
While these cumulative achievements in overcoming the physical barriers of fiber-optic networking were unfolding, a protocol was being developed that would also have a major impact on optical networking. In 1973, at Xerox Corporation's Palo Alto Research Center (PARC), researcher Bob Metcalfe designed and tested the first Ethernet network in his quest to avoid walking from his computer to the printer with a disk. While this initial network was not built on fiber-optic technology, Metcalfe developed the standards that governed communication on the cable (7). By 1980, the Institute of Electrical and Electronics Engineers (IEEE) had proposed the first criteria to standardize LANs (local area networks) and MANs (metropolitan area networks) for the purpose of achieving barrier-free communication. Today, the Ethernet standards can be classified mainly by speed: 10 Mbps Ethernet, Fast Ethernet, Gigabit Ethernet (GbE), 10 Gigabit Ethernet (10 GbE), 25 Gigabit Ethernet, 40 Gigabit Ethernet, and 100 Gigabit Ethernet (100 GbE) (8).
For networking, fiber optics provides more bandwidth, speed, and security than its copper counterparts. While cost may make it prohibitive for every part of a network, fiber is often applied to the segments that require high-throughput data communication. Thankfully, because of the Ethernet protocol, fiber and copper network segments are able to communicate with one another.
(1) "Optical Medium." Wikipedia, Wikimedia Foundation, 7 Sept. 2019.
(2) "What Is Attenuation and How Does It Affect Your Connection?" iTel Networks, 27 July 2015.
(3) Bellis, Mary. "How Fiber Optics Was Invented." Updated 3 June 2019.
(4) "Wavelength-Division Multiplexing." Wikipedia, Wikimedia Foundation, last edited 19 Sept. 2019, https://en.wikipedia.org/wiki/Wavelength-division_multiplexing
(5) Chowdhry, Amit. "Google Invests In $300 Million Underwater Internet Cable System To Japan." Forbes, 12 Aug. 2014, https://www.forbes.com/sites/amitchowdhry/2014/08/12/google-invests-in-300-million-
(6) "Massive Bandwidth to Come From Wavelength Division Multiplexing and Coherent Communications." mathscinotes, Math Encounters Blog, 14 Aug. 2014.
(7) Pidgeon, Nick. "How Ethernet Works." HowStuffWorks.com, 1 Apr. 2000.
(8) "Ethernet Standards for Optical Fiber Networking." www.cozlink.com.