Many visions of the future include advanced teleconferencing technologies that let you see, hear, and converse with other people as if they were standing beside you. In most media and speculative fiction I have seen, this is usually depicted as working via holograms. To build such a technology in the real world, you'd likely need two main systems: one to record yourself, and another to holographically project the other parties you are meeting with. (There's also the transfer of the data, but with the internet, that's pretty much a non-issue.) From what I know, though, holographic technologies are pretty inconvenient. Scanning a real-time 3D model of yourself can't simply be done anywhere and anytime you please, and projecting 3D holograms isn't easy either. Hence, both halves of this futuristic teleconferencing technology, the recording and the receiving of holographic content, are tricky to accomplish and will remain so for the foreseeable future (I haven't really been keeping up with holographic tech lately, so correct me if I'm wrong here).
The biggest drawback of this technology is that it can't be used on the fly. To hold such a holo-call, one would likely need specially built rooms for recording and playing the holograms. There could be cybercafe-styled parlours that offer these rooms and let users call other users around the world. That's quite inconvenient compared to modern video-calling software you can use from anywhere, anytime, but I can see people still wanting to use it, especially to call family members or significant others who live far away and whom they cannot meet in person for extended periods. How viable a business model opening such a cafe would be, I don't know, but I can see people wanting to use such services from time to time.
I can also see people who are moderately well-off or on the wealthy side installing this holo-calling apparatus in their homes for more ease of use.
This technology could also be used in offices, but again, I don't see it being used too frequently. Current video-calling technology that relies on simple webcams already has its infrastructure set up and is easy to use. Holo-calling doesn't have many advantages over it from what I can tell, so while it may be preferred in certain situations or by particular people, software like Zoom will probably remain more efficient and easier to use.
In the end, no matter how cool the tech may seem, recording yourself and rendering your contact's hologram well will be too difficult and troublesome in most situations, and outright impossible in others, such as on public transport (a great many people make video calls on buses and trains).
I found my old PS Vita the other day, and while replaying Metal Gear Solid 2 I noticed that the characters use something called the Codec to call their associates. Nanotechnology in the ears and radio frequencies handle the audio, yet they also seem able to see the other person.
The entire conversation happens in their heads. There are no cameras around at all, but these people can seemingly see and speak to each other face-to-face, and they can do it while on the move, to boot. Now, I know this is a game, but it got me thinking about how amazing it would be if we could build something like this. It would solve virtually every problem I outlined with the holo-call tech: it would be convenient, mobile, and usable by multiple people anywhere, without special rooms. What would we need to create something like this?
After a while, it occurred to me. Of the two problems with the holo-call tech, one is solvable via a technology that should become common in the near future: AR glasses. With these, you don't need holographic projectors or anything fancy; the glasses can render anything as if it were right beside you. That removes the major hurdle of needing special projectors on hand at the location where you take the call. Thus, one of the two biggest issues I initially saw is gone.
The recording problem would still persist, however. You would still need real-time information about the other party in three-dimensional space to place them convincingly in your AR environment, meaning you'd need a fancy capture setup anyway. You could fall back on a phone camera or a webcam, and while that would work, it has two issues: you would not be hands-free, and although you could see the other person face-to-face, it would still be a flat video feed, like a smartphone viewport floating in your AR glasses. You wouldn't be able to project that person into your 3D AR environment.
After some more thought, I have another proposed solution for that. It's a little far-fetched, but bear with me.
Brain Computer Interfaces (Wearable, Non-Invasive, Read Only)
What if we used BCIs to capture your side of the call? I mean high-spatial- and high-temporal-resolution BCIs that can record most of your brain activity in real time. Paired with deep learning models trained to interpret that data, couldn't we extract comprehensible sensory information about what the body is doing at every moment? Proprioceptive data would tell the models how a person is moving, the expression they are making, and so on. The person's own visual and other sensory streams would supply further detail, say, what clothing they have on. As long as the system starts from a pre-existing high-definition 3D model of the person, couldn't all of this be combined to project the other party into your AR environment as if they were there in person?
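To make the idea concrete, here is a toy sketch of the pipeline I'm imagining: per-frame neural features go through a decoder that outputs pose and expression parameters, which then drive the pre-made 3D scan. Every name and number here is invented for illustration; no real BCI hardware or API is assumed, and a fixed linear map stands in for what would really be a trained deep network.

```python
# Hypothetical BCI-to-avatar pipeline. All names and dimensions are
# invented for illustration; nothing here reflects real BCI hardware.

NUM_FEATURES = 8      # length of one frame's neural feature vector (assumed)
NUM_JOINTS = 4        # joints in the toy skeleton
NUM_BLENDSHAPES = 3   # facial-expression channels

# Stand-in "trained weights" mapping features -> pose + expression params.
WEIGHTS = [[0.1 * (i + j) for j in range(NUM_FEATURES)]
           for i in range(NUM_JOINTS + NUM_BLENDSHAPES)]

def decode_frame(features):
    """Map one frame of neural features to avatar-driving parameters."""
    params = [sum(w * f for w, f in zip(row, features)) for row in WEIGHTS]
    return {
        "joint_angles": params[:NUM_JOINTS],   # drives body pose
        "blendshapes": params[NUM_JOINTS:],    # drives facial expression
    }

def drive_avatar(base_model, params):
    """Combine the pre-scanned model with per-frame params for the renderer."""
    return {**base_model, **params}

# Pre-made high-definition scan (just identifiers in this toy version).
base = {"mesh": "alice_scan_v1", "texture": "alice_tex_v1"}

frame = decode_frame([0.5] * NUM_FEATURES)
avatar_state = drive_avatar(base, frame)
```

The key design point is that the BCI never has to transmit a 3D scan in real time: only a small parameter vector per frame crosses the network, and the heavy model data is already sitting on the receiver's device.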
How much computing power would this require?
Another way to record yourself would be to use a VR motion-tracking suit and hold the call in a VR world. This would be easier to achieve technically, but I don't see many people using it frequently, since putting a suit on and taking it off just for a call or two is too much of a hassle. Also, while the suit can pick up your gestures and motion, it can't track facial expressions, which I'd imagine is one of its greatest drawbacks. It could be used by gamers and enthusiasts to meet friends, but you'd likely have to use a virtual avatar rather than appear as yourself. BCIs are harder on the technical side, but if all users needed to do was put on a helmet, I can see most people picking that over a motion-tracking suit.
For the BCI technique, a lot also depends on the scanning technology used to create the initial model of the person, and how realistic that model can be. Games nowadays look very realistic, yet you can still tell it's a game, partly because the animations and movements aren't as smooth and flowing as the real world. I'm not sure how many people feel this, but no matter how realistic games look, the characters' movements still feel artificial and puppet-like. With brain data plus deep learning, I expect this to improve over time.
Now, even if all of this were to work, you would still need two separate pieces of external hardware sitting on your head and face. It's still not as free as the system shown in Metal Gear. That is, until...
Brain Computer Interfaces (Implant, Invasive, Bidirectional)
This is where truly futuristic systems come into play. You can lose both the helmet and the glasses; everything is implanted under the scalp. Your neural data is read directly, and your contact's data is projected straight to your visual cortex. This is the ultimate teleconferencing experience: hands-free, mobile, and thought-controlled. With this, you can also move up a step from AR and start meeting in fully virtual worlds, without suits or extra motion-tracking hardware.
Sadly, we have a ways to go until this is feasible, unlike my earlier scenario of non-invasive BCIs combined with AR glasses, which should be possible in the near future.
What do you think?