This interview is an excerpt from our report titled “Optical transceivers: The new gold standard in data communications by 2030.”
The volume of IP-data transmission continues to surge due to the proliferation of 5G networks and expanding reach of cloud computing. Consequently, the demand for efficient data transfer capabilities is reaching unprecedented levels.
Conservative estimates project that global IP-data transmission will reach roughly 149 zettabytes by 2024. This exponential growth demands innovations in technologies capable of delivering high-speed, long-distance data transmission.
PreScouter talked with Roberto Panepucci, a top-tier researcher at the Cornell Nanoscale Science & Technology Facility. Roberto is leading the way on advancements like all-optical silicon switching.
We interviewed Roberto about the opportunities and challenges posed by increased data transmission demand, with a focus on the role of optical transceivers as a key technology.
This interview covers the following topics:
- Changes to platforms and their impact on features or functionalities
- Speed-distance trade-off in data transfer
- The impact of channels, form factors, and protocols on transmission speed and distance
- Awareness of the optical transceiver market
- Digital coherent transmission and its role
- Indium Phosphide HBT architecture for transceivers
- Use of comb sources in optical transceiver technology
Changes to platforms and their impact on features:
Q: How well-suited are existing optical transceiver technologies to handle the surge in data demand over the next 3-5 years? Will entirely new solutions be necessary?
A: So, as you follow the commercial literature, companies are still using, let's call them, 'standard approaches.' There's a segregation between datacom for 5G and 6G, which is slightly different from what's needed for data centers. Data centers rely on a quick time to market, and lifetime is not as critical. Reliability is important, but not as critical as it is for longer-lasting infrastructure, if you're looking further into the future.
Beyond 3-5 years, there’s going to be a change in paradigm for fundamental devices for telecom because people are going to start trying to reduce latency. That means different wavelengths and whatnot. Some of the fundamentals as far as silicon photonics goes and other integrated platforms will remain there, but a few other things are changing.
There’s a big issue with price point as far as packaging goes because that’s certainly where there’s not been enough research. So, recently here in New York, they’ve just announced a big investment, part of the whole trend to boost packaging. It’s really not on par with what’s needed but still it’s a big effort.
Q: You mentioned some changes to the platforms. Could you elaborate on what those changes are? Are there any specific features or functionalities that will be impacted?
A: For anything that's not a data center, some of the critical applications are very reliant on beating your competitor as far as latency goes. So, that means getting data from point A to point B. You can do a lot with electronics, but there's a fundamental limit: you would need to increase the speed of light.
For that purpose, it is advisable to use hollow-core fibers, so that you're propagating light in air or vacuum, which comes with certain challenges. If you want to explore the full potential, then you want to use longer wavelengths and strategies that match differently with what's available.
Q: Would this enable speeds beyond 100 Gbps? Am I reading that correctly?
A: That’s not the point. The point is for when somebody clicks on a financial transaction, that’s going to start a trade war of sorts. But you need to be there several milliseconds before your competitor. So, it’s delay time, that’s the issue, not data rates.
In terms of data rates, we already have several demonstrations over 100 gigabits per second on commercial modules, at 400 and 800 Gbps. The issues there are: how do I cram in enough power? How do I arrange things so that I can use the fanciest algorithms to recover data that might suffer from some kind of dispersion? And how do I reduce the cost of integrating my sources and my detectors? That's where there are some new, interesting ideas.
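To put Roberto's latency point in perspective, here is a minimal sketch (our illustration, not his) comparing one-way propagation delay in standard silica fiber against an idealized hollow-core fiber; the 1,200 km route and the group-index values are assumptions chosen purely for illustration:

```python
# Minimal sketch: one-way propagation delay in standard silica fiber versus an
# idealized hollow-core fiber. The 1,200 km route and index values are
# illustrative assumptions, not figures from the interview.

C = 299_792.458  # speed of light in vacuum, km/s

def one_way_delay_ms(distance_km: float, group_index: float) -> float:
    """Delay in milliseconds through a medium with the given group index."""
    return distance_km * group_index / C * 1_000

DISTANCE_KM = 1_200
solid_core = one_way_delay_ms(DISTANCE_KM, 1.468)   # typical silica fiber
hollow_core = one_way_delay_ms(DISTANCE_KM, 1.001)  # near-vacuum core

print(f"solid-core fiber : {solid_core:.3f} ms")
print(f"hollow-core fiber: {hollow_core:.3f} ms")
print(f"advantage        : {solid_core - hollow_core:.3f} ms")
```

Over this route the hollow-core fiber arrives roughly 1.9 ms earlier, exactly the several-millisecond margin that decides latency-sensitive trades.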
Speed-distance trade-off in data transfer:
Q: Is there a clear speed-distance trade-off in data transfer?
A: Yes, the speed-distance limit is a fundamental limitation. So, the quality of your source is critical in having enough signal-to-noise to be able to recover data using clever algorithms. The speed-distance limit is an issue, but it's not something that today's devices can't handle, in a way. And that's where you might see hollow-core fiber re-emerge as the technology that could potentially reshape the speed-distance relationship.
Currently, the losses in those fibers are so high that this isn't realistic for the short term. But further in the future, when manufacturing of the actual fiber reaches a point where distance is achievable, and you find a way of adding devices that can amplify signals over long distances to overcome losses, then you do have a potential solution. Such a solution would represent a much better medium for propagating light without dispersion and without incurring such penalties over distance.
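As a rough illustration of that speed-distance trade-off (our sketch, not Roberto's), a common rule of thumb says the dispersion-limited reach of a simple NRZ signal scales as the inverse square of the bit rate, L ≈ c / (D·B²·λ²), which shows why higher speeds need either dispersion compensation or cleverer recovery algorithms:

```python
# Back-of-the-envelope sketch of the speed-distance trade-off: rough
# chromatic-dispersion-limited reach of a plain NRZ signal, using the common
# rule L ~ c / (D * B^2 * lambda^2).

C = 3.0e8             # speed of light, m/s
D = 17e-6             # standard fiber dispersion at 1550 nm, s/m^2 (17 ps/nm/km)
WAVELENGTH = 1550e-9  # carrier wavelength, m

def dispersion_limited_km(bitrate_gbps: float) -> float:
    """Rough dispersion-limited reach in km for an uncompensated NRZ signal."""
    b = bitrate_gbps * 1e9
    return C / (D * b**2 * WAVELENGTH**2) / 1e3

for rate in (10, 25, 50, 100):
    print(f"{rate:>4} Gb/s -> ~{dispersion_limited_km(rate):.1f} km")
```

Quadrupling the bit rate cuts the uncompensated reach by a factor of 16, which is why coherent detection and DSP-based recovery matter so much at 400G and beyond.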
About channels, form factors, and protocols:
Q: How do channels, form factors, and protocols impact data transmission speed and distance? Additionally, is there a role for pluggable QSFP or similar transceivers?
A: Channels represent different wavelengths of light used to transmit information. Originally, repeaters were needed to strengthen the signal at certain wavelengths. Now, thanks to broadband amplifiers, such as Raman, all the wavelength channels can be boosted at once.
So, platforms that can handle multiple channels have more latitude. So, yes, the number of channels is very important.
For small form factor modules, you need advances in technology, specifically from the light-source point of view, because until not too long ago, you needed a laser for each channel. Can you imagine needing 16, 32, 64, or 128 channels? Could you fit as many lasers?
Now there’s something called comb sources and those enable you to generate a large number of finely precise active channels, let’s say, that you can then individually modulate with integrated optics on a small form factor. That is really the key thing that’s going to win markets in the short term.
Q: The laser sources you mentioned, those come from III-V semiconductors?
A: Typically, those are the integrable ones where you can make very tiny bars of lasers. Well, it’s been 15 years since people were saying, “Oh, yes, in a few years, we’ll integrate this on silicon.” And it’s been taking a lot more than that, but I think there are now enough vendors with the skill set to do that that it’ll become a reality.
Q: Do protocols play a role in the transmission?
A: Insofar as they enable you to do better error correction, they do. But again, only to a point: it's not so much about fundamentals as it is, let's call it, a strategic decision, because you have costs, and you have forward-looking strategies to reduce non-recurring engineering for new devices.
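As a toy illustration of that error-correction role (our sketch; real transceivers use far stronger codes such as Reed-Solomon or LDPC), a Hamming(7,4) code corrects any single flipped bit in a block at the cost of three parity bits per four data bits:

```python
# Toy sketch of protocol-level error correction: Hamming(7,4) encoding,
# a single-bit channel error, and syndrome-based correction. Real transceiver
# FEC (e.g., Reed-Solomon, LDPC) is far stronger; this only shows the idea.
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],   # generator: 4 data bits -> 7 code bits
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],   # parity-check matrix
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

data = np.array([1, 0, 1, 1])
codeword = data @ G % 2

received = codeword.copy()
received[2] ^= 1                       # the channel flips one bit

syndrome = H @ received % 2            # nonzero syndrome flags the error
if syndrome.any():
    # the syndrome equals exactly one column of H; that column indexes the error
    error_pos = int(np.where((H.T == syndrome).all(axis=1))[0][0])
    received[error_pos] ^= 1           # flip it back

print("recovered data:", received[:4], "original:", data)
```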
Awareness of the optical transceiver market:
Q: Do you follow the optical transceiver market?
A: I’ve seen some consolidation. I saw with sadness some companies that were interesting players to disappear. I was following Molex, basically technologies that were using the low confinement strategy because I didn’t see the need for such high density and I thought those could be potential winners because of the significantly lower cost in packaging. That’s what I thought was going to make a difference, having worked on the high confinement silicon photonics and seeing how hard it is to do that.
So, no, the acquisitions that followed the initial startups working in the space, Luxtera, all those guys, tell me that some of the big players are already well positioned, and startups really have to come up with innovative approaches like comb sources and other things to try to gain an advantage.
Q: Do startups have a role to play in the development of novel devices?
A: Yes. Big companies like Intel and Cisco bought companies that had a lot going on, and of course there's inertia with some of those initiatives, so startups bring a threat that is eventually recognized as a possibility, and who knows.
Q: Following your point about advanced algorithms and DSPs (Digital Signal Processors) enabling efficient transmission speeds, are there specific companies leading the way in exploiting this technology?
A: I did follow this a few years ago and there were a few companies that were very deep into the ASIC development of solutions already, very close to state of the art as far as the silicon goes. But I was looking at this from the point of view of design, so it surprised me.
There were companies in Argentina and companies with subsidiaries in Brazil, all of them based in San Jose, but still with teams of designers from those other countries that I was really impressed with. But I can't say I paid enough attention to that part to have an opinion here.
Digital coherent transmission and its role:
Q: Is digital coherent transmission a truly innovative approach for building optical transceivers, or is it primarily a marketing term? What would be its role?
A: Coherent transmission is big. It has been for 10 years. It's basically how your FM radio works. In FM, somebody somewhere, and you don't know where, has an antenna and sends a signal. At your own location, you tune to some frequency, and what you do is mix your frequency with what they're transmitting, which is around a hundred megahertz. That's not so high, but as far as electronics go, it's pretty high.
And what you get is something that is intermediate frequency and that’s the approach where nowadays lasers can be made with such precision, they can actually do that. You can have a local laser that when you mix with whatever light is coming from 1,000 miles away, the mixing of the two lights produces an electrical signal with a frequency you can actually measure.
And that’s where the coherence goes. The fact that the color of the light from the two sources, what’s transmitted and what’s locally available is coherent over enough time that you can observe the minute differences between them. That’s sort of what it is. That’s very important.
Your previous question about protocols and DSPs has to do exactly with extracting relevant information from this coherence. Unless it's something else: since you said 'digital coherent,' it could be a different buzzword, a way of encoding information so that you can more easily deconvolve it.
Indium Phosphide HBT architecture for transceivers:
Q: In terms of the transistor itself, Indium Phosphide HBT, also came up on our searches. Are there companies or players exploiting that architecture to build transceivers? Would that work?
A: Yeah, that’s something that my PhD group of my previous adviser had done, indium phosphide HBTs and photodetectors PIN and avalanche and whatnot.
So, there are two spaces. There's one where you try to do things on silicon-related platforms. Those are typically non-active: you have to add detectors, which people have now integrated with germanium, but you definitely need to add a source. It used to be that you mandatorily had to co-package or hybridize some laser.
Indium phosphide, and the III-V-based materials generally, also work as a platform for integration. And they are great because they give you a bunch of different wavelengths all on one platform, potentially using selective area epitaxy or other techniques that allow you to change the band gap.
And so, there are competing strategies and the III-V packs a lot more punch as far as active devices go, enabling even amplifiers-on-chip. So, that is very desirable.
But it comes at a cost, because so much of the fabrication is unique to that material space. The people who manufacture that type of device were typically stuck fabricating discrete devices like lasers, detectors, and amplifiers, and they've tried to move out to higher-volume applications such as this.
There will always be opportunities for them. After all, laser sources efficient enough to surpass them have not yet been invented. So, there are contenders, and they're indeed important.
Use of comb sources in optical transceiver technology:
Q: What about comb sources?
A: So, comb sources use nonlinear effects in silicon resonators, or in other novel nonlinear materials, and they are fundamental because they enable a lot of laser lines, so you can have many channels. Comb sources are the new kids on the block, but they are still in academic labs.
There’s something that I’ve seen as far as putting things on a spreadsheet that somebody should look at more carefully. I wrote a paper on this a long time ago. I think the expression would be on-chip external cavity lasers and this is where you don’t put the laser onto your chip. You put the gain medium like the chip that will just give amplification and everything that has to do with selecting the wavelength, making it stable, etc., is done on a separate part of the device. I am referring to external cavity lasers. There are startups in Europe and in the U.S that are already trying the technology. It is a competition but it predates comb sources.
Conclusion:
Roberto Panepucci’s interview focuses on the evolution of optical transceiver technologies due to increasing data demands. He explains that as traditional data transmission methods reach their performance limits, new solutions such as comb sources are essential. Roberto discussed how new technologies are being developed to improve data communication and its transmission in the near future.
Disclaimer: The views and opinions expressed by the interviewees are solely their own and do not represent or reflect the opinions, policies, or positions of PreScouter, nor do they imply its endorsement.