The “ultra-realistic digital human” has long held an esteemed position in graphics: the goal is a lifelike 3D persona in the virtual world that can engage with users in real time and deliver a deeply immersive experience. The ultra-realistic host “Qianyan” (「千言」) is pioneering this domain.
“Qianyan”, an ultra-realistic digital human, is a joint creation of Soul Shell (「数字栩生」) and Changsha Keenbow Information Technologies Co., Ltd (「千博信息」). It spontaneously generates sign language for TV broadcasts and translates proficiently between Chinese and sign language, sustaining an uninterrupted flow of information for deaf and hard-of-hearing viewers. As a sign language anchor, “Qianyan” has signed contracts with hundreds of TV stations and also serves in scenarios such as deaf education and governmental accessibility service channels, further demonstrating its versatility.
Seamless interaction is critical to an immersive digital-human experience, yet it has long been difficult to achieve. To deliver real-time interaction for its ultra-realistic digital human, Soul Shell built “Qianyan”’s “Brain Centre” in the cloud on advanced real-time cloud rendering technology. This approach lets “Qianyan” serve hearing-impaired users in real time, with the high-quality, low-latency rendering needed for fluid sign language interaction. “Qianyan” can even sing and dance in sign language. As a prime example of the interplay between the virtual and the real, “Qianyan” represents hyper-realistic digital humans that aim to improve the quality of human life in the real world.
“Qianyan”, Soul Shell’s Ultra-Realistic Digital Human
The evolution of digital humans has been truly remarkable over the years. In the past, building a digital human relied heavily on artists for tasks such as digital sculpting and skeletal rigging; this approach made intricate expressions difficult to achieve and placed heavy demands on the artists.
The emergence of photo-based reconstruction technology has brought a highly efficient solution. By algorithmically merging multi-angle photographs of real actors, it can create digital humans that are hyper-realistic, accurate, efficient, and cost-effective. It not only replicates dynamic expressions but also captures the optical and geometric characteristics of the face.
Nevertheless, creating an ultra-realistic digital human is only one part of the equation. Putting these virtual beings to effective use depends heavily on their capacity for “real-time interaction”; without it, digital avatars are confined to video-based scenarios.
Empowering Seamless Real-Time Experiences in the Digital Human Era
Rendering can be classified into two main categories: offline rendering and real-time rendering. Offline rendering, constrained by hardware and software architecture, generates images from predetermined rays and trajectories and is used mainly for 2D virtual digital humans. Real-time rendering, on the other hand, excels at processing massive amounts of graphics data on the fly: it computes each frame from the actual environmental light sources, camera position, and material parameters, making it well suited to rendering 3D virtual digital humans efficiently.
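To make “computing each frame from the live scene state” concrete, here is a toy sketch of per-frame diffuse (Lambertian) shading driven by the current light direction and material colour. A real engine evaluates this per pixel on the GPU; one value per frame keeps the idea visible. All numbers are illustrative assumptions, not values from any actual digital-human pipeline.

```python
import math

def lambert(normal, light_dir, albedo):
    """Diffuse intensity = albedo * max(0, n . l), the Lambertian model."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return [a * max(0.0, n_dot_l) for a in albedo]

normal = (0.0, 0.0, 1.0)   # surface facing the camera
albedo = (0.8, 0.6, 0.5)   # skin-like material colour (assumed)

# The light swings across three frames; each frame is re-shaded from the
# *current* light direction rather than baked in advance.
for frame in range(3):
    angle = math.radians(30 * frame)
    light_dir = (math.sin(angle), 0.0, math.cos(angle))
    r, g, b = lambert(normal, light_dir, albedo)
    print(f"frame {frame}: shaded colour ({r:.2f}, {g:.2f}, {b:.2f})")
```

As the light moves, the shaded colour dims frame by frame; offline rendering would instead replay a precomputed sequence, which is why it cannot react to live input.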
Rendering a hyper-realistic digital human is intricate, spanning material and texture rendering, lighting, detail rendering, feedback, refinement, and final output. A single frame of a virtual digital human can take anywhere from several minutes to hours to render. A 3D hyper-realistic virtual human capable of real-time interaction requires a great many frames, and therefore a substantial amount of computation.
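A quick back-of-the-envelope calculation shows the scale of the gap. The figures below are illustrative assumptions (5 minutes per offline frame, a 30 fps interactive target), not measurements from Soul Shell or Paraverse:

```python
# Compare an assumed offline render time against a real-time frame budget.
OFFLINE_SECONDS_PER_FRAME = 5 * 60   # assume 5 minutes per offline frame
TARGET_FPS = 30                      # common target for interactive streaming

frame_budget_s = 1.0 / TARGET_FPS                        # time allowed per frame
speedup_needed = OFFLINE_SECONDS_PER_FRAME / frame_budget_s

print(f"Real-time frame budget: {frame_budget_s * 1000:.1f} ms")
print(f"Required speedup vs. offline: {speedup_needed:,.0f}x")  # roughly 9000x
```

A speedup of that magnitude is far beyond what a phone or tablet GPU can supply locally, which is what motivates moving the rendering into the cloud.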
Paraverse’s LarkXR is a cloud-rendering solution that pairs real-time rendering with a suite of associated technical services. To meet real-time requirements it draws on the nearly unlimited GPU computing resources of the cloud, handling the heavy image-rendering workload so that hyper-realistic digital personas can interact on all kinds of terminals, anytime and anywhere. LarkXR thus not only satisfies real-time requirements but exceeds them by harnessing the cloud’s extensive capabilities.
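The division of labour in such a pipeline can be sketched as follows: a thin client forwards user input to a cloud GPU renderer and merely displays the encoded frames it gets back. This is a generic, hypothetical sketch of the cloud-streaming pattern; none of the class or method names below come from the LarkXR API.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    index: int
    payload: bytes  # in practice, an H.264/H.265-encoded image

class CloudRenderer:
    """Stand-in for the remote GPU render service (hypothetical)."""
    def render(self, index: int, user_input: str) -> Frame:
        # A real service would rasterize or ray-trace the 3D scene on a
        # cloud GPU, then video-encode the result before streaming it down.
        return Frame(index, f"frame {index} after {user_input!r}".encode())

class ThinClient:
    """Stand-in for a phone/tablet that only decodes and displays video."""
    def __init__(self, renderer: CloudRenderer):
        self.renderer = renderer

    def interact(self, inputs):
        frames = []
        for i, event in enumerate(inputs):
            # In reality this call crosses the network, so the client
            # needs no local rendering hardware at all.
            frames.append(self.renderer.render(i, event))
        return frames

frames = ThinClient(CloudRenderer()).interact(["tap", "swipe", "sign: hello"])
print(len(frames), "frames streamed")
```

Because the client only decodes video, any device that can play a stream can host the digital human, which is the core of the “anytime, anywhere, any terminal” claim.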
Simultaneous Interpretation (from Cantonese to Sign Language) of Financial Secretary Paul Chan Mo-po’s Press-Conference Speech by “Qianyan”
LarkXR: Revolutionizing Digital Human Access by Significantly Reducing Costs
Resolving the technical barriers to producing “ultra-realistic digital humans” has paved the way for the technology’s broad promotion and adoption. Lowering the cost of access is crucial to making digital humans popular and affordable for a wider user base. Affordable access also eases their use in commercial applications, attracting more companies and organizations to embrace this cutting-edge development and, in turn, driving substantial growth in commercial digital-human applications.
“Qianyan” and its ultra-realistic digital human technology depend on combining real-time interaction with high-precision content. That combination demands substantial computing power, typically available only on dedicated devices. Most of Qianyan’s users, however, rely on personal devices such as mobile phones, tablets, and laptops, which lack the computing capability for local rendering. Minimizing or eliminating additional access costs for users is therefore crucial to promoting and expanding ultra-realistic digital humans like “Qianyan”, ensuring that users enjoy seamless AI real-time interaction at the highest level of realism.
“In the future, we aim to improve digital human development while lowering production costs. We want to make digital humans accessible to a wider audience while keeping demand and cost perfectly aligned,” said Song Zhen, Founder and CEO of Soul Shell.