
Graph Neural Networks for Binding Affinity Prediction

AI in Drug Discovery

DNAJA1 protein, a potential target for pancreatic cancer.


  1. Intro
  2. Binding Affinity
  3. Virtual Screening
  4. Ligand parameterization
  5. Receptor parameterization
  6. Graph neural networks
  7. Example of implementation
  8. Summary

Author note: some words and definitions may be unfamiliar to the reader. Feel free to click and follow the links — you’ll land on the appropriate wiki page, where you can find more information.

1. Intro

The topic relates to the application of AI and bioinformatics in drug discovery. The core purposes of the proposed AI technology [1] are:

  1. Cut expenses and duration of early drug discovery phases (target discovery and validation, lead identification and optimization). The phases are marked in red on the image below.
  2. Extend virtual screening capabilities and accuracy by rejecting dubious molecular coordinates and transitioning to a more efficient parameterization of (bio)molecules.
Drug Discovery process. AI technology described in the article is applied at the stages highlighted in red

To get a context of the problem and solution, let’s first consider the main concepts.

2. Binding Affinity

2.1 What is binding affinity?

Binding affinity is the strength of the binding interaction between a single biomolecule (e.g., a protein or DNA) and its ligand/binding partner (e.g., a drug or inhibitor).

Binding affinity is typically measured and reported by the equilibrium inhibition constant (Ki), which is used to evaluate and rank-order the strengths of biomolecular interactions. The smaller the Ki value, the greater the binding affinity of the ligand for its target; the larger the Ki value, the more weakly the target molecule and ligand are attracted to and bind one another.
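To make the ranking concrete, Ki connects to the Gibbs free energy of binding via ΔG = RT·ln(Ki): a smaller constant means a more negative (more favorable) energy. A minimal Python sketch (the nanomolar and micromolar constants are hypothetical example values):

```python
import math

R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.15     # room temperature, K

def delta_g(ki_molar):
    """Gibbs free energy of binding (kcal/mol) from an inhibition constant (M)."""
    return R * T * math.log(ki_molar)

# A smaller Ki means a more negative (more favorable) binding free energy:
tight = delta_g(1e-9)   # hypothetical 1 nM inhibitor
weak = delta_g(1e-6)    # hypothetical 1 uM inhibitor
print(tight, weak)
assert tight < weak
```

Here the 1 nM ligand comes out roughly 4 kcal/mol more favorable than the 1 µM one, which is the kind of gap virtual screening tries to resolve.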

Binding affinity is influenced by non-covalent intermolecular interactions such as hydrogen bonding, electrostatic interactions, hydrophobic and Van der Waals forces between the two molecules. In addition, the binding affinity between a ligand and its target molecule may be affected by the presence of other molecules.

2.2 Why is it important?

Whenever you are characterizing proteins, nucleic acids, or other biomolecules, understanding their binding affinity to substrates, inhibitors, and cofactors is key to understanding the intermolecular interactions that drive biological processes, structural biology, and structure-function relationships.

In drug discovery, binding affinity is used to rank hits binding to the target and design drugs that bind their targets selectively.

“Selectively” means the drug must have a high affinity for the selected target and the lowest possible affinities for other targets, to avoid off-target binding and the side effects it causes.

2.3 How to measure it?

There are many experimental ways to measure binding affinity and inhibition constants, such as ELISAs, gel-shift assays, pull-down assays, equilibrium dialysis, analytical ultracentrifugation, surface plasmon resonance, and spectroscopic assays.

Experimental methods are expensive in terms of human effort, time, and resources. Due to the tremendous number of chemical compounds, experimental bioactivity screening efforts require the aid of computational approaches. A set of such approaches is called virtual screening.

3. Virtual Screening

Let’s give a definition and a brief classification for the concept:

Virtual screening is a set of computational techniques for the selection of molecules that are most likely to bind to a drug target (protein or polynucleotide).

There are two main branches of virtual screening:

3.1 Ligand-based

The 3D structure of the target is unknown. A set of geometric rules and/or physical-chemical properties (known as a pharmacophore model) obtained from QSAR studies is used for the screening.

3.2 Structure-based

The 3D structure of the target is known. The target coordinates and a scoring function are used to calculate the affinity of the target for ligands. Also known as molecular docking (see the gif below).

Docking of a small molecule (green) into the crystal structure of the beta-2 adrenergic G-protein coupled receptor (PDB: 3SN6)

Example of a legacy virtual screening method failure for affinity evaluation

Conventional structure-based virtual screening tools include AutoDock, AutoDock Vina, and Vina with modern scoring functions such as RF-Score [4].

Recently, our group tried to separate known active and inactive thrombin ligands using AutoDock, AutoDock Vina, and Vina with RF-Score. The activity of the selected molecules had been established experimentally (i.e., it was known in advance; we were testing how well the virtual screening methods correspond to reality). All ligands are available through the DUD-E database web interface.

As a result, AutoDock, AutoDock Vina, and Vina with RF-Score were not able to separate active (blue bars) and inactive (red bars) ligands, even with a properly set binding site.

AutoDock, AutoDock Vina, and AutoDock Vina with RF Score are unable to separate active and inactive thrombin ligands

Graph neural networks as a virtual screening tool

Within the current context, the proposed AI technique is classified as a kind of virtual screening. Let’s review how graph neural networks are used for the parameterization of molecules.

4. Ligand parameterization

Ligands are usually small molecules, so the atom/chemical-bond scale is appropriate. For large molecules, such as peptides, refer to the next chapter.

4.1 Atom parameterization

Each atom of a ligand is described by features such as mass, total and partial charges, number of radical electrons (integers); atom type, valence, hybridization, aromaticity, and chirality type (one-hot encodings), etc. Atomic coordinates may or may not be used as a feature (especially if unknown). Atom features are shown as a vector with three circles in the image below.
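As a rough sketch of this featurization, the one-hot and scalar features can be concatenated into a single vector per atom. The tiny vocabularies and feature choices below are illustrative only; a real pipeline would typically derive them with a cheminformatics toolkit such as RDKit:

```python
# Illustrative atom featurization: one-hot categorical features plus scalars.
ATOM_TYPES = ['C', 'N', 'O', 'S', 'Cl']       # example vocabulary, not exhaustive
HYBRIDIZATIONS = ['sp', 'sp2', 'sp3']

def one_hot(value, choices):
    """Binary vector with a 1 at the position of `value` (all zeros if unknown)."""
    return [1.0 if value == c else 0.0 for c in choices]

def atom_features(symbol, mass, charge, hybridization, is_aromatic):
    """Concatenate one-hot encodings with scalar features into one vector."""
    return (one_hot(symbol, ATOM_TYPES)
            + one_hot(hybridization, HYBRIDIZATIONS)
            + [mass, float(charge), 1.0 if is_aromatic else 0.0])

# The sp3 carbon of chloroform (CHCl3):
vec = atom_features('C', 12.011, 0, 'sp3', False)
print(len(vec))   # 5 + 3 + 3 = 11 features
```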

4.2 Bond parameterization

Each bond of a ligand is described by its type (single, double, triple, aromatic), ring membership, whether the bond is conjugated (0/1), stereo configuration (cis-/trans-, E/Z, R/S, none), bond direction (up/down), etc. Bond features are shown as a vector with two circles in the image below.

4.3 Intramolecular forces

Intramolecular means inside a molecule. These forces are learned by an attention mechanism, which by design should capture electron density distribution effects within a molecule. A recursively trained attention context vector aggregates information about local neighborhoods to provide expressive representations of small molecules. At later time steps, the target node embedding recursively includes information from more distant nodes. A more intense (smaller) yellow circle indicates a higher attention level for that neighbor node.

A molecule of chloroform (CHCl3) parameterized as a graph with an attention mechanism
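The neighborhood aggregation can be sketched with a toy attention step: each node scores its neighbors, normalizes the scores with a softmax, and takes the attention-weighted sum of their embeddings. The graph, embedding size, and bilinear scoring function below are illustrative stand-ins, not the article's exact mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy star graph: node 0 (the carbon of CHCl3) bonded to nodes 1-4 (H, Cl, Cl, Cl).
neighbors = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
h = rng.normal(size=(5, 8))            # initial 8-dim node embeddings

def attention_step(h, neighbors, w):
    """One round of attention-weighted neighborhood aggregation.

    Scores come from a bilinear form through `w` -- an illustrative
    stand-in for a learned attention mechanism."""
    h_new = np.empty_like(h)
    for i, nbrs in neighbors.items():
        scores = np.array([h[i] @ w @ h[j] for j in nbrs])
        alpha = np.exp(scores - scores.max())
        alpha /= alpha.sum()           # softmax: attention paid to each neighbor
        h_new[i] = sum(a * h[j] for a, j in zip(alpha, nbrs))
    return h_new

w = rng.normal(size=(8, 8))
h1 = attention_step(h, neighbors, w)   # embeddings now carry 1-hop context
h2 = attention_step(h1, neighbors, w)  # a second step mixes in 2-hop context
print(h2.shape)                        # (5, 8)
```

After two steps, the carbon's embedding already reflects every atom in the molecule, which is the recursive aggregation the article describes.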

5. Receptor parameterization

Receptors are typically large biomolecules: polynucleotides (DNA, RNA) or proteins. There are different receptor parameterization techniques for graph neural networks. Let’s review some of them.

5.1 As a linear graph

A receptor can be represented as a linked list (linear graph) of amino acids (for proteins and peptides) or nucleotides (for DNA and RNA). A node, in this case, is one amino acid or one nucleotide. A node can be parameterized with features such as charge, flexibility (Smith), hydrogen bond donors/acceptors, hydrophobicity, polarity (Zimmerman), Van der Waals volume, etc. Edges are mostly identical (peptide bonds for proteins/peptides, and the alternating sugar-phosphate backbone along the polynucleotide chain) and do not require parameterization. One can only hope that this kind of attention mechanism is able to simulate/learn the secondary/tertiary structure of these biomolecules. Cartesian coordinates are not required.
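A minimal sketch of such a linear graph for a peptide, with per-residue hydrophobicity as the only node feature (the Kyte-Doolittle values and the four-residue sequence are just an example):

```python
# A peptide as a linear graph: one node per residue, peptide-bond edges only.
HYDROPHOBICITY = {'A': 1.8, 'G': -0.4, 'L': 3.8, 'S': -0.8}  # Kyte-Doolittle scale

def linear_graph(sequence):
    """Build node features and a chain of uniform, unparameterized edges."""
    nodes = [{'residue': aa, 'hydrophobicity': HYDROPHOBICITY[aa]}
             for aa in sequence]
    edges = [(i, i + 1) for i in range(len(sequence) - 1)]  # peptide bonds
    return nodes, edges

nodes, edges = linear_graph('GALS')
print(len(nodes), edges)   # 4 [(0, 1), (1, 2), (2, 3)]
```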

5.2 As a graph of intermolecular interfaces

A protein-ligand graph is computed from the atomic coordinates in a PDB file. In this graph, vertices represent secondary structure elements (usually alpha helices and beta strands) or ligand molecules, while the edges represent contacts and relative orientations between them. Atomic coordinates are required in this case.

Computation of the protein graph from 3D atom data

5.3 As adjacency matrix

An interatomic (or inter-amino-acid, or inter-nucleotide) distance matrix is constructed from the Cartesian atomic coordinates. In these matrices, the rows and columns are assigned to the nodes in the network, and the presence of an edge is symbolized by a numerical value.

Graphs by edge type and their adjacency matrices

It is possible to build undirected and directed neighbor matrices without Cartesian coordinates, but coordinates are mandatory to create weighted adjacency matrices, where the distance between nodes is taken into account. For weighted molecular adjacency matrices, a radial interaction cutoff (typically 12 Å) is used to truncate pairs of nodes that do not interact.
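The weighted adjacency construction can be sketched in a few lines of NumPy; the coordinates below are toy values chosen so that one pair falls outside the cutoff:

```python
import numpy as np

def weighted_adjacency(coords, cutoff=12.0):
    """Weighted adjacency matrix from Cartesian coordinates (N x 3, Angstrom).

    Entries hold interatomic distances; pairs beyond the radial cutoff
    (and self-pairs) are truncated to zero."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))              # pairwise distance matrix
    return np.where((dist > 0) & (dist <= cutoff), dist, 0.0)

# Three toy "atoms": the third sits outside the 12 A cutoff of the first two.
coords = np.array([[0.0, 0.0, 0.0],
                   [1.5, 0.0, 0.0],
                   [20.0, 0.0, 0.0]])
adj = weighted_adjacency(coords)
print(adj[0, 1], adj[0, 2])   # 1.5 0.0
```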

5.4 As a raw FASTA string

Applicable only to NLP models, such as Transformers, GRUs, LSTMs, etc. In this case, the raw receptor FASTA string is used, and parameterization is performed implicitly by the model, for example by learning embeddings. Coordinates are not required in this case.

Crystal structure of Rnd3/RhoE (1M7B). FASTA is marked in red

Node and edge featurization define the type and architecture of the resulting graph neural network.

6. Graph neural networks

This section provides a short introduction to graph neural networks (GNNs) for molecular property prediction and outlines their categorization.

6.1 Convolutional graph neural network (Conv-GNN)

Convolutional neural networks (CNNs) are networks specialized for interacting with grid-like data, such as a 2D image. As molecules are typically not represented as 2D grids, chemists have focused on a variant of this approach: the Conv-GNN on molecular graphs.

Molecular graphs confer key advantages: they bypass the conformational challenge of using 3D representations while maintaining invariance to rotation and translation due to their pairwise definition. The MoleculeNet paper (Wu et al., 2017) offers a concise conceptual comparison of six major variants. To facilitate the following explanation, the framework of neural message passing networks put forth by Gilmer et al. (2017) is used.

Neural message passing networks utilize a convolutional layer, simply a matrix of scalar weights, to exchange information between atoms or bonds within a molecule and produce a fixed-length, real-valued vector that embeds the molecular information. To begin, they generate or compute a feature vector for each atom within the molecule. These feature vectors are then collected into a matrix. Additionally, they generate a graph topology matrix that specifies the connectivity of the graph. In a forward convolutional pass, these three matrices are multiplied together. This allows information to be exchanged between the feature vector of each atom and those of its immediate neighbors, in accordance with the connectivity specified by the topology matrix. This updates each atom’s feature vector to include information about its local environment. This updated feature vector matrix is then passed through an activation function (e.g., ReLU) and can then be iteratively updated by using it as the feature matrix in another convolutional pass. This propagates information throughout the molecule. Finally, these atom feature vectors are either summed or concatenated to give a unique, learned representation of the molecule as a real-valued vector (see the figure below).
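The forward pass just described reduces to multiplying three matrices and applying an activation. A minimal NumPy sketch with illustrative sizes and random weights:

```python
import numpy as np

rng = np.random.default_rng(1)

# Feature matrix X: one row per atom (4 atoms, 6 features each).
x = rng.normal(size=(4, 6))
# Topology matrix A: adjacency with self-loops so each atom keeps its own info.
a = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
# Convolutional layer: simply a matrix of scalar weights.
w = rng.normal(size=(6, 6))

def conv_pass(a, x, w):
    """Multiply the three matrices and apply ReLU: each atom's feature
    vector is mixed with those of its immediate neighbors."""
    return np.maximum(a @ x @ w, 0.0)

h = conv_pass(a, x, w)     # features now reflect 1-hop environments
h = conv_pass(a, h, w)     # a second pass propagates information further
readout = h.sum(axis=0)    # sum-pooling into one molecule-level vector
print(readout.shape)       # (6,)
```

The `readout` vector is the latent-space representation that a downstream fully connected network would consume.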

The learned representation in vector form is referred to as a representation in latent space and is then used as the input for a traditional fully connected DNN to finally make the classification or prediction. Backpropagation is once again used to train these networks by propagating gradients backward and determining how to change the convolution matrix weights and the parameters in the DNN.

Examples: GAT, GCN, AGNN, SchNet, MPNN

6.2 Recurrent graph neural network (Rec-GNN)

Recurrent neural networks (RNNs), introduced by Hopfield in 1982, are specialized for dealing with sequences of arbitrary length. This makes them ideally suited to handling textual representations of chemical information, such as FASTA and SMILES. The critical difference is that in the previous architectures each data input is distinct, while in an RNN each input influences the next one. An illustrative example is viewing any particular input, such as a SMILES string, as time-series data. The presence of a carbon atom at one moment in time influences what the next character is likely to be. This is expressed in the architecture by feeding the output of the hidden layer for that carbon into the hidden layer of the next atom. Feeding one hidden state into the next gives the system a recursive relationship within the hidden layer, but it can be viewed as directional by “unfolding” the network into an acyclic graph. By doing this, the network maintains a history of all previous inputs, which influence its predictions at later times. The network can then be trained using a recursive form of backpropagation.
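The unfolding can be sketched as a minimal recurrent cell stepping over a SMILES string one character at a time. The hidden size and random weights below are illustrative, and a real model would add a trained output head mapping the final state to an affinity value:

```python
import numpy as np

rng = np.random.default_rng(2)

smiles = 'CC(=O)O'                        # acetic acid
vocab = sorted(set(smiles))               # character vocabulary
idx = {ch: i for i, ch in enumerate(vocab)}

dim_h = 8
w_xh = rng.normal(size=(len(vocab), dim_h)) * 0.1   # input -> hidden
w_hh = rng.normal(size=(dim_h, dim_h)) * 0.1        # hidden -> hidden (recurrence)

h = np.zeros(dim_h)
for ch in smiles:                         # "unfolding" the network over time
    x = np.zeros(len(vocab))
    x[idx[ch]] = 1.0                      # one-hot encode the current character
    h = np.tanh(x @ w_xh + h @ w_hh)      # this hidden state feeds the next step

# `h` now summarizes the whole string; a dense head would map it to affinity.
print(h.shape)                            # (8,)
```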

Examples: AttentiveFP, MGCN, GPNN

Illustration of ACNN and RNN architectures for chemical applications. Colored arrows stemming from the amine group (ACNN) indicate the information transfer from the nitrogen to other heavy atoms, with the color corresponding to the convolutional pass. Light gray arrows indicate each atom’s feature vector in the matrix. Importantly, properties such as the atomic number (Z) are often encoded using one-hot vectors, which are binary, but for spatial efficiency, the integer is used in its place. The RNN model shows a simplified “many to one” recurrent network, with the text above and below the dashed lines indicating a stylized affinity prediction system. This system takes in receptor FASTA and ligand SMILES strings and predicts the affinity value.

Difference between recurrent (Rec-GNN) and convolution (Conv-GNN) based graph neural networks: Rec-GNNs apply the same set of weights (W1=W2) until a convergence criterion is met, whereas Conv-GNNs apply different weights at each iteration.

6.3 Special

These GNN architectures add several extensions to the basic GNNs (Conv-GNN and Rec-GNN), such as different pooling strategies, skip-connections, attention mechanisms, super-nodes, and isomorphic graphs.

Examples: Weisfeiler-Lehman, Graph-Attention Transformers

7. Example of implementation

As an example, we will train the Atomic Convolutional Neural Network (ACNN) by Gomes et al. (2017) [3]. The dataset is PDBBind 2015 [2] — it contains three subsets: core (195 structures), refined (3,706 structures), and full (14,260 structures). The target is to predict ∆G — the free energy of receptor-ligand complexation, which serves as a binding affinity metric.

Diagram of Atomic Convolutions on Protein-Ligand Systems. ∆G (complex) = G (complex) − G (protein) − G (ligand)

1. Select configuration

See the full list of available configurations here.

idx = 0

2. Prepare train and test datasets

dataset, train_set, test_set = load_dataset(args)
args['train_mean'] = train_set.labels_mean.to(args['device'])
args['train_std'] = train_set.labels_std.to(args['device'])
train_loader = DataLoader(dataset=train_set, batch_size=args['batch_size'],
                          shuffle=True, collate_fn=collate)
test_loader = DataLoader(dataset=test_set, batch_size=args['batch_size'],
                         shuffle=False, collate_fn=collate)

3. Load ACNN model, initialize loss function, optimizer, and early stopping callback

model = load_model(args)
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=args['lr'])
stopper = EarlyStopping(mode=args['mode'], patience=args['patience'])

4. Train until early stopping

for epoch in range(args['num_epochs']):
    run_a_train_epoch(args, epoch, model, train_loader, loss_fn, optimizer)
    test_scores = run_an_eval_epoch(args, model, test_loader)
    test_msg = update_msg_from_scores('test results', test_scores)
    print(test_msg)
    early_stop = stopper.step(test_scores['mae'], model)
    if early_stop:
        print('Early stopping')
        break

5. Results and discussion

With the default (ACNN_PDBBind_core_pocket_random) configuration, ACNN achieves 1.5006 MAE, 0.1461 R2.

epoch 12/500, training | loss 0.7162, r2 0.2815, mae 1.6027
test results, r2 0.1461, mae 1.5006

In terms of the task, an MAE of 1.5006 means that the mean absolute error of the model is ~1.5 kcal/mol of Gibbs free energy of binding. This is reasonably precise compared to the energy of a water-water hydrogen bond O−H···:O (21 kJ/mol, or 5.0 kcal/mol).

See full implementation on GitHub.

8. Summary

  1. Challenges. Predicting drug-target interactions is crucial for novel drug discovery, drug repurposing, and uncovering off-target effects. Experimental bioactivity screening takes significant time (1–3 years) and expense (more than 100 million USD on average per new drug on the market) but has low efficiency. Bioassays are typically backed by computational methods, but legacy simulations fail to deliver either sufficient precision — as in the example of AutoDock Vina with the modern RF-Score, which failed to separate active and inactive thrombin ligands — or sufficient speed — as with molecular dynamics or first-principles quantum mechanics simulations. As a result, more than 90% of the proposed leads are declined (He et al., 2017).
  2. Solution. In silico methods are in high demand since they can expedite the drug development process by systematically and promptly suggesting new sets of candidate molecules, which saves time and reduces the cost of the whole process by up to 43% (DiMasi et al., 2016). Graph neural networks deliver superior accuracy for the task in a matter of milliseconds per receptor-ligand pair and extend docking capabilities by accepting structures without coordinates.
  3. Results. The ACNN model easily achieves ~1.5 kcal/mol MAE predicting Gibbs free energy of binding affinity. This repository contains the full implementation.
  4. Philosophy. Bio-/cheminformatics is now on the edge of a paradigm shift similar to the one computer vision experienced when the deep learning model AlexNet won the 2012 ImageNet contest. Instead of manually crafting features for molecules, integrative features are learned by optimization methods.


References

  1. Gurbych O., Druchok M., Yarish D., Garkot S. (2020) High throughput screening with machine learning. Presented at NeurIPS 2020.
  2. Wang R. et al. (2005) The PDBbind database: methodologies and updates. J Med Chem 48(12):4111–9.
  3. Gomes J. et al. (2017) Atomic Convolutional Networks for Predicting Protein-Ligand Binding Affinity. arXiv:1703.10603.
  4. Li H. et al. (2015) Improving AutoDock Vina Using Random Forest: The Growing Accuracy of Binding Affinity Prediction by the Effective Exploitation of Larger Data Sets. Molecular Informatics 34(2).
  5. Mater A.C., Coote M.L. (2019) Deep Learning in Chemistry. J Chem Inf Model 59(6):2545–2559.

Graph Neural Networks for Binding Affinity Prediction was originally published in The Startup on Medium, where people are continuing the conversation by highlighting and responding to this story.
