
Turning Chairs into Tables: Novel Deep Learning Method for Unpaired Shape Transforms


TECHNICAL PAPERS PRESS RELEASE

Brisbane, Australia, 18 October 2019 — Turning a chair into a table, or vice versa, might sound like something of a magic trick. In this case, no magic is involved, just plenty of complex geometry and machine learning.

The deep neural network, called LOGAN, can learn to transform the shapes of two different objects, for example a chair and a table, in a natural way, without ever seeing a paired transform between the shapes. All the network has seen is a set of tables and a set of chairs, and from these alone it can automatically translate shapes between the two unpaired domains. LOGAN can also perform both content and style transfers between two different types of shapes without any change to its network architecture.


The team of researchers behind LOGAN, from Simon Fraser University, Shenzhen University, and Tel Aviv University, is set to present its work at ACM SIGGRAPH Asia, held 17 to 20 November in Brisbane, Australia. SIGGRAPH Asia, now in its 12th year, attracts the most respected technical and creative people from around the world in computer graphics, animation, interactivity, gaming, and emerging technologies.

“Shape transform is one of the most fundamental and frequently encountered problems in computer graphics and geometric modeling,” says senior coauthor of the work, Hao (Richard) Zhang, professor of computing science at Simon Fraser University. “What is new and emerging is to tie this important problem to deep learning—can a machine learn to transform shapes, particularly under the unsupervised or unpaired setting?”

In this work, the researchers turned to a well-known technique in machine learning, the generative adversarial network (GAN), for unpaired, general-purpose shape transforms. Their network is trained on two sets of shapes, e.g., tables and chairs, or different letters. There is neither a pairing between shapes in the two domains to guide the shape translation nor any point-wise correspondence between shapes. Once trained, their method takes a point-set shape from one domain, say a table or a chair, and transforms it into a shape from the other domain.
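As a rough illustration of the unpaired setup, the following sketch shows how an adversarial objective can drive the translation without any chair-table pairs: a discriminator only ever compares translated shapes against real, unpaired shapes from the target domain. The networks, layer sizes, and losses here are simplified assumptions for illustration, not LOGAN's actual architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

POINTS = 2048  # assumed number of points per shape (illustrative)

# Hypothetical translator and discriminator operating on flattened point sets.
translator = nn.Sequential(nn.Linear(POINTS * 3, 1024), nn.ReLU(), nn.Linear(1024, POINTS * 3))
discriminator = nn.Sequential(nn.Linear(POINTS * 3, 512), nn.ReLU(), nn.Linear(512, 1))

def unpaired_adversarial_losses(chairs, tables):
    # chairs and tables are independent (batch, POINTS, 3) batches; no pairing is assumed.
    translated = translator(chairs.flatten(1))   # chair pushed toward "table"
    d_real = discriminator(tables.flatten(1))    # real, unpaired tables
    d_fake = discriminator(translated.detach())
    # The discriminator learns to tell real tables from translated chairs.
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    # The translator is rewarded when its output fools the discriminator,
    # so no chair-table pairs or point correspondences are ever needed.
    g_loss = F.binary_cross_entropy_with_logits(discriminator(translated),
                                                torch.ones_like(d_real))
    return d_loss, g_loss

# Example call with random stand-ins for sampled shapes from the two domains.
d_loss, g_loss = unpaired_adversarial_losses(torch.randn(8, POINTS, 3), torch.randn(8, POINTS, 3))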

LOGAN overcomes a key challenge in shape transform techniques. Given two sets of shapes, such as chairs and tables, it is difficult for a network to learn which particular shape features should be preserved or altered to produce a realistic transformation of the object, from chair to table or vice versa. The team’s method learns the characteristic differences in features and can automatically determine which features should be kept or discarded to achieve the desired shape transform, and it can do so without supervision.

Other techniques for unpaired image-to-image translation in computer vision have been successful in translating style features, but most have not achieved shape translation. “In 2017, CycleGAN and DualGAN, two highly influential works from computer vision, were developed for unpaired image-to-image style translation. LOGAN specifically produces realistic shape translations, both in style and content, for the first time,” notes Zhang. Additionally, the researchers demonstrate that LOGAN can learn “style-preserving” content transfers: the network can automatically transform a letter ‘R’ into a ‘P’ of the same font style. For style translation, their method can turn a bold-face letter ‘A’ into an italicized ‘A’.

To devise their method, the researchers train a neural network that encodes the two types of input shapes into a common latent space. In deep learning, the latent space is represented by the bottleneck layer, where the network captures the features of the input data. LOGAN is not only trained to turn a chair code into a table code, but also to turn a table code into the same table code. The latter enables “feature preservation” and helps maintain certain table features during chair-to-table shape translations.
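A minimal sketch of that idea, assuming a simple point-set autoencoder and a latent translator (the module names, layer sizes, and losses below are illustrative placeholders, not the authors’ implementation), might look as follows: an identity term penalizes the translator for changing a table code that is already a table code, which is what encourages feature preservation.

import torch
import torch.nn as nn
import torch.nn.functional as F

POINTS, LATENT = 2048, 256   # assumed point-set size and latent code size

# Hypothetical encoder/decoder shared by both domains, plus a latent translator.
encoder = nn.Sequential(nn.Linear(POINTS * 3, 1024), nn.ReLU(), nn.Linear(1024, LATENT))
decoder = nn.Sequential(nn.Linear(LATENT, 1024), nn.ReLU(), nn.Linear(1024, POINTS * 3))
to_table = nn.Sequential(nn.Linear(LATENT, 512), nn.ReLU(), nn.Linear(512, LATENT))

def translation_losses(chair_points, table_points):
    # Both inputs are (batch, POINTS, 3) point sets; the two batches are unpaired.
    chair_code = encoder(chair_points.flatten(1))
    table_code = encoder(table_points.flatten(1))
    # Reconstruction keeps the shared latent space faithful to the input shapes.
    recon_loss = (F.mse_loss(decoder(chair_code), chair_points.flatten(1)) +
                  F.mse_loss(decoder(table_code), table_points.flatten(1)))
    # Identity / feature preservation: translating a table code "to table"
    # should leave it unchanged, so common features survive chair-to-table.
    identity_loss = F.mse_loss(to_table(table_code), table_code)
    # The translated chair code (its realism would be judged by an adversarial
    # loss, omitted here) can be decoded back into a table-like point set.
    translated_shape = decoder(to_table(chair_code)).view(-1, POINTS, 3)
    return translated_shape, recon_loss, identity_loss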

In ablation studies, the researchers demonstrate LOGAN’s superior performance in unpaired shape transforms on a variety of examples, compared with baselines and state-of-the-art approaches. Their study shows that LOGAN is able to learn which shape features to keep during a transform, and its results closely resemble the desired target object.

In future work, the team aims to fine-tune LOGAN to work on all domain pairs and make it truly general-purpose. The current version of LOGAN is also not yet smart enough to understand the meanings of the shapes, and the researchers are working on making the network “smarter” by incorporating this information.

Zhang’s collaborators on “LOGAN: Unpaired Shape Transform in Latent Overcomplete Space” include Kangxue Yin (Simon Fraser University); Zhiqin Chen (Simon Fraser University); Hui Huang (Shenzhen University); and Daniel Cohen-Or (Tel Aviv University). For the team’s paper, visit their project page.

To register for SIGGRAPH Asia and for more details of the event, visit http://sa2019.siggraph.org.

Notes to Editors

  1. For more information on the Technical Papers Program, click here.
  2. SIGGRAPH Asia 2019 takes place at the Brisbane Convention and Exhibition Centre, 17-20 November 2019. For more information, please visit https://sa2019.siggraph.org.
  3. Keep tabs on updates, download images, and more at the SIGGRAPH Asia virtual newsroom.
  4. Media may apply for accreditation to SIGGRAPH Asia at bit.ly/sa19accreditation.

About SIGGRAPH Asia 2019

The 12th ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia (SIGGRAPH Asia 2019) will be held in Brisbane, Australia at the Brisbane Convention and Exhibition Centre (BCEC) from 17 – 20 November 2019. The annual event held in Asia attracts the most respected technical and creative people from all over the world who are excited by computer graphics research, science, art, animation, gaming, interactivity, education and emerging technologies.

The four-day conference will include a diverse range of juried programs, such as the Art Gallery / Art Papers, Computer Animation Festival, Courses, Doctoral Consortium, Emerging Technologies, Posters, Technical Briefs, Technical Papers and XR (Extended Reality). Curated programs include the Business & Innovation Symposium, Demoscene and Real-Time Live! A three-day exhibition held from 18 – 20 November 2019 will offer a business platform for industry players to market their innovative products and services to computer graphics and interactive techniques professionals and enthusiasts from Asia and beyond. For more information, please visit http://sa2019.siggraph.org. Find us on Facebook, Twitter, Instagram and YouTube with the official event hashtags #SIGGRAPHAsia and #SIGGRAPHAsia2019.

About ACM SIGGRAPH

The ACM Special Interest Group on Computer Graphics and Interactive Techniques is an interdisciplinary community interested in research, technology, and applications in computer graphics and interactive techniques. Members include researchers, developers, and users from the technical, academic, business, and art communities. ACM SIGGRAPH enriches the computer graphics and interactive techniques community year-round through its conferences, global network of professional and student chapters, publications, and educational activities. For more information, please visit www.siggraph.org.

About Koelnmesse

Koelnmesse Pte Ltd is one of the world's largest trade fair companies. Its more than 80 trade fairs and exhibitions have the broadest international scope in the industry, as 60 percent of the exhibitors and 40 percent of the visitors come from outside Germany. Koelnmesse events include leading global trade fairs for 25 sectors, such as Imm Cologne, Anuga, IDS, INTERMOT, Interzum Cologne, Photokina, Gamescom, and the International Hardware Fair Cologne. Koelnmesse has been ACM SIGGRAPH’s event organizer for the last 11 editions of SIGGRAPH Asia. For more information, please visit www.koelnmesse.com.sg.

Media Contacts

Illka Gobius, PINPOINT PR, illka@pinpointpr.sg | WhatsApp +65 97698370

Jamie Huang, Koelnmesse Pte Ltd, jamie.huang@siggraph.org | WhatsApp +65 92329738