#6107. Developing a Generative Model Utilizing Self-attention Networks: Application to Materials/Drug Discovery

Publication date: October 2026
Proposal available till 10-05-2025
Total number of authors per manuscript: 4

The title of the journal is available only to authors who have already paid.
Journal’s subject area:
Computer Science Applications;
Organic Chemistry;
Drug Discovery;
Structural Biology;
Molecular Medicine;
Places in the authors’ list:
Place 1: free (for sale), 2350 $, Contract6107.1
Place 2: free (for sale), 1200 $, Contract6107.2
Place 3: free (for sale), 1050 $, Contract6107.3
Place 4: free (for sale), 900 $, Contract6107.4

Abstract:
A new generative model is developed in which the Variational Autoencoder is combined with the Transformer architecture. The proposed model, the Variational Autoencoding Transformer (VAT), is applied to the task of molecular generation, showing that, with proper training, the VAT can not only reconstruct input molecules with high accuracy but also generate new molecules from a predefined prior almost perfectly. A desirable property of the VAT is that no heuristic tuning is required for optimal performance, which suggests that the model is readily applicable to a variety of datasets. As practical directions toward materials/drug discovery, two strategies are demonstrated: a fine-tuning method for directed molecular generation and a method of mixing molecules in the latent space.
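The "mixing molecules in the latent space" strategy mentioned in the abstract amounts to interpolating between the latent codes of two encoded molecules and decoding the result. The abstract gives no implementation details, so the sketch below is purely illustrative: the function name `mix_latents` and the plain-list latent vectors are assumptions, and the VAT encoder/decoder themselves are omitted.

```python
# Hypothetical sketch of latent-space mixing between two molecules.
# The VAT encoder/decoder are assumed to exist elsewhere; here we only
# illustrate the interpolation step on the latent vectors themselves.

def mix_latents(z1, z2, alpha):
    """Linearly interpolate two latent vectors: (1 - alpha)*z1 + alpha*z2.

    alpha = 0 returns z1, alpha = 1 returns z2, and intermediate values
    yield a blend of the two encoded molecules.
    """
    if len(z1) != len(z2):
        raise ValueError("latent vectors must have the same dimensionality")
    return [(1.0 - alpha) * a + alpha * b for a, b in zip(z1, z2)]

# Example: midpoint between two 4-dimensional latent codes.
z_a = [0.0, 1.0, -2.0, 0.5]
z_b = [2.0, 1.0, 2.0, 1.5]
z_mid = mix_latents(z_a, z_b, 0.5)
print(z_mid)  # [1.0, 1.0, 0.0, 1.0]
```

In practice the mixed vector would be passed to the decoder to obtain a new candidate molecule; a schedule of alpha values traces a path between the two parent molecules in latent space.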
Keywords:
Generative model; Transformer; Variational Autoencoder

Contacts: