Novel AI Hypernetworks.zip [PORTABLE]


Low-Rank Adaptation (LoRA) is a novel technique introduced by Microsoft in 2021 for fine-tuning large language models (LLMs). LoRA is an efficient adaptation strategy that introduces no additional inference latency and substantially reduces the number of trainable parameters for downstream tasks while maintaining model quality.
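
As a rough illustration of the idea, the sketch below wraps a frozen linear layer with a trainable low-rank update W + (alpha/r)·BA; the class and parameter names are illustrative placeholders, not Microsoft's reference implementation.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight plus a trainable low-rank update (illustrative sketch)."""
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)   # pretrained weight stays frozen
        self.base.bias.requires_grad_(False)
        # Low-rank factors: only r * (in + out) extra parameters are trained.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        # y = base(x) + scale * x A^T B^T; since B starts at zero, training starts from the base model.
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

# After training, the update can be folded into the base weight, so inference adds no latency:
# base.weight += scale * lora_B @ lora_A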




Overcoming this critical limitation, my lab has presented beyond-crossbar deep learning architectures based on a novel gated memristive device, the memtransistor. Unlike a typical crossbar, which can only compute input-weight products along one dimension at a time, our architecture processes products along multiple dimensions in parallel. Higher-order multiplicative interactions in emerging DNN layers such as self-attention and hypernetworks are therefore performed natively by our system. Moreover, we have shown that the gate controls of the memtransistor can also be used for on-chip continuous learning.
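
To make the targeted computational pattern concrete, here is a plain NumPy sketch of a single self-attention head: two ordinary input-weight products followed by data-dependent products between activations, which is the higher-order interaction mentioned above. The array names and sizes are purely illustrative and say nothing about the hardware itself.

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """One attention head: input-weight products followed by
    activation-activation products (the higher-order interaction)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv           # ordinary input-weight products
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # product between two activations
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                          # second activation-activation product

X = np.random.randn(5, 16)                      # 5 tokens, 16 features
Wq, Wk, Wv = (np.random.randn(16, 16) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)             # shape (5, 16)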


To accelerate graph processing, we have shown probabilistic spin graphs. Probabilistic spin graphs operate using a novel straintronic magnetic tunneling junction (sMTJ), which switches via voltage-controlled magnetic anisotropy. Networks of sMTJs possess two properties that are critical for graph processing: their switching probability can be controlled by the gate voltage, and two or more sMTJs can be coupled with programmable correlation coefficients to encode conditional probabilities. Moreover, networks of straintronic MTJs switch on a picosecond timescale, enabling extremely high-throughput and scalable inference. The National Science Foundation (NSF) currently funds this research.
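
The behavior described above can be caricatured in software with a toy NumPy sketch: each node flips with a gate-tunable probability, and a coupling coefficient correlates a pair of nodes. All function names, constants, and the tanh mapping are invented for illustration and are not a model of the sMTJ physics.

import numpy as np

rng = np.random.default_rng(0)

def sample_spin_pair(v_gate_a, v_gate_b, coupling, n_samples=10_000):
    """Two coupled probabilistic bits: gate voltages set the standalone flip
    probabilities; the coupling coefficient correlates the pair."""
    samples = []
    for _ in range(n_samples):
        # Bit A flips with a probability set only by its gate voltage.
        a = 1 if rng.random() < 0.5 * (1 + np.tanh(v_gate_a)) else -1
        # Bit B sees its own gate voltage plus the coupling to bit A's state.
        b = 1 if rng.random() < 0.5 * (1 + np.tanh(v_gate_b + coupling * a)) else -1
        samples.append((a, b))
    return np.array(samples)

s = sample_spin_pair(v_gate_a=0.2, v_gate_b=-0.1, coupling=1.5)
print("pairwise correlation:", np.corrcoef(s[:, 0], s[:, 1])[0, 1])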


Another top-1 model is BossNAS. BossNAS (Block-wisely Self-supervised Neural Architecture Search) adopts a novel self-supervised representation learning scheme called ensemble bootstrapping. The authors first factorize the search space into blocks. It is worth mentioning that the original work focuses only on vision models and uses a combination of CNN and transformer blocks.


In this paper we develop a novel methodology to simultaneously optimize the locations and designs of a set of new facilities facing competition from preexisting facilities. Known as the Competitive Facility Location and Design Problem (CFLDP), this model was previously solvable only when a limited number of design scenarios was pre-specified. Our methodology removes this limitation and allows much more realistic models to be solved. The results are illustrated with a small case study.


We develop a novel, entirely encoder-based, two-stage StyleGAN inversion method that achieves high reconstruction quality, editability, and fast inference at the same time by employing hypernetworks.
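
In a hedged PyTorch sketch of the core idea: an encoder embeds the target image, and a small hypernetwork maps that embedding to an additive weight offset for a frozen generator layer, so inversion needs only forward passes rather than per-image optimization. The module names and dimensions below are placeholders, not the paper's actual architecture.

import torch
import torch.nn as nn

class HyperOffset(nn.Module):
    """Predicts an additive weight offset for one frozen generator layer
    from an image embedding (illustrative sketch, not the published model)."""
    def __init__(self, embed_dim, out_features, in_features):
        super().__init__()
        self.out_features, self.in_features = out_features, in_features
        self.head = nn.Linear(embed_dim, out_features * in_features)

    def forward(self, embedding):
        delta = self.head(embedding)
        return delta.view(-1, self.out_features, self.in_features)

embed = torch.randn(1, 512)                     # embedding produced by an image encoder
base_weight = torch.randn(64, 128)              # one frozen generator weight
hyper = HyperOffset(512, 64, 128)
adapted_weight = base_weight + hyper(embed)[0]  # per-image adapted layer, no optimization loop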


Inferring representations of 3D scenes from 2D observations is a fundamental problem of computer graphics, computer vision, and artificial intelligence. Emerging 3D-structured neural scene representations are a promising approach to 3D scene understanding. In this work, we propose a novel neural scene representation, Light Field Networks (LFNs), which represent both the geometry and the appearance of the underlying 3D scene in a 360-degree, four-dimensional light field parameterized via a neural implicit representation. Rendering a ray from an LFN requires only a *single* network evaluation, as opposed to the hundreds of evaluations per ray required by the ray-marching or volumetric renderers of 3D-structured neural scene representations. In the setting of simple scenes, we leverage meta-learning to learn a prior over LFNs that enables multi-view-consistent light field reconstruction from as little as a single image observation. This results in dramatic reductions in time and memory complexity and enables real-time rendering. The cost of storing a 360-degree light field via an LFN is two orders of magnitude lower than for conventional methods such as the Lumigraph. Utilizing the analytical differentiability of neural implicit representations and a novel parameterization of light space, we further demonstrate the extraction of sparse depth maps from LFNs.
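
A minimal sketch of the rendering path described above, assuming a plain MLP stands in for a trained LFN: each ray (origin o, direction d) is converted to its 6D Plücker coordinates (d, o × d), and the network is evaluated once per ray to produce a color. The network width and camera setup are arbitrary.

import torch
import torch.nn as nn

def plucker(origins, directions):
    """Map rays (o, d) to 6D Plücker coordinates (d, o x d)."""
    d = directions / directions.norm(dim=-1, keepdim=True)
    m = torch.cross(origins, d, dim=-1)          # moment vector of the ray
    return torch.cat([d, m], dim=-1)

# Stand-in light field network: 6D ray -> RGB, one evaluation per ray.
lfn = nn.Sequential(nn.Linear(6, 256), nn.ReLU(),
                    nn.Linear(256, 256), nn.ReLU(),
                    nn.Linear(256, 3))

origins = torch.tensor([0.0, 0.0, -2.0]).repeat(1024, 1)   # pinhole camera at z = -2
directions = torch.randn(1024, 3)                           # one direction per pixel
colors = lfn(plucker(origins, directions))                  # (1024, 3): no ray-marching loop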


Unintuitively, LFNs encode not only the appearance of the underlying 3D scene, but also its geometry. Our novel parameterization of light fields via the mathematically convenient Plücker coordinates, together with the unique properties of neural implicit representations, allows us to extract sparse depth maps of the underlying 3D scene in constant time, without ray-marching. We achieve this by deriving a relationship between an LFN's derivatives and the scene's geometry: at a high level, the geometry of the level sets of the 4D light field encodes the geometry of the underlying scene, and these level sets can be accessed efficiently via automatic differentiation. This is in contrast to 3D-structured representations, which require ray-marching to extract any representation of the scene's geometry. We show sparse depth maps extracted from LFNs that were trained to represent simple room-scale environments.


An essential step in the discovery of new drugs and materials is the synthesis of a molecule that so far exists only as an idea, in order to test its biological and physical properties. While computer-aided design of virtual molecules has made large progress, computer-assisted synthesis planning (CASP) to realize physical molecules is still in its infancy and lacks the performance level that would enable large-scale molecule discovery. CASP supports the search for multi-step synthesis routes, which is very challenging due to high branching factors in each synthesis step and the hidden rules that govern the reactions. The central and repeatedly applied step in CASP is reaction prediction, for which machine learning methods yield the best performance. We propose a novel reaction prediction approach that uses a deep learning architecture with modern Hopfield networks (MHNs) that is optimized by contrastive learning. An MHN is an associative memory that can store and retrieve chemical reactions in each layer of a deep learning architecture. We show that our MHN contrastive learning approach enables few- and zero-shot learning for reaction prediction which, in contrast to previous methods, can deal with rare, single, or even no training examples for a reaction. On a well-established benchmark, our MHN approach pushes the state-of-the-art performance up by a large margin, improving the predictive top-100 accuracy from 0.858 ± 0.004 to 0.959 ± 0.004. This advance might pave the way to large-scale molecule discovery.
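
The associative-memory update at the heart of an MHN layer can be written in a few lines. This NumPy sketch shows the standard retrieval rule (a softmax over similarities to stored patterns); random vectors stand in for learned reaction representations, so it is only a conceptual illustration.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def hopfield_retrieve(query, memories, beta=8.0, steps=1):
    """Modern Hopfield update: xi <- M^T softmax(beta * M xi).
    With a large beta, a single step typically converges to the closest stored pattern."""
    xi = query
    for _ in range(steps):
        xi = softmax(beta * memories @ xi) @ memories
    return xi

memories = np.random.randn(1000, 64)               # stored reaction representations (illustrative)
memories /= np.linalg.norm(memories, axis=1, keepdims=True)
query = memories[42] + 0.1 * np.random.randn(64)   # noisy query pattern
retrieved = hopfield_retrieve(query, memories)
print(np.argmax(memories @ retrieved))             # typically 42: the matching stored reaction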


Without any means of interpretation, neural networks that predict molecular properties and bioactivities are merely black boxes. We unravel these black boxes and demonstrate approaches to understanding the learned representations hidden inside these models. We show how single neurons can be interpreted as classifiers that determine the presence or absence of pharmacophore- or toxicophore-like structures, thereby generating new insights and relevant knowledge for chemistry, pharmacology, and biochemistry. We further discuss how these novel pharmacophores/toxicophores can be determined from the network by identifying the components of a compound that are most relevant to the network's prediction. Additionally, we propose a method that can be used to extract new pharmacophores from a model, and we show that the extracted structures are consistent with literature findings. We envision that access to such interpretable knowledge is a crucial aid in the development and design of new pharmaceutically active molecules, and helps to investigate and understand the failures and successes of current methods.
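
One simple way to realize the "most relevant components" idea is input attribution. The hedged PyTorch sketch below scores each substructure fingerprint bit by gradient × input for a toy model; it is a generic stand-in for the interpretation methods discussed above, with all sizes and bit indices invented.

import torch
import torch.nn as nn

# Toy bioactivity model over binary substructure fingerprints (illustrative only).
model = nn.Sequential(nn.Linear(2048, 128), nn.ReLU(), nn.Linear(128, 1))

fingerprint = torch.zeros(1, 2048)
fingerprint[0, [12, 305, 1789]] = 1.0                 # substructures present in the compound
fingerprint.requires_grad_(True)

score = model(fingerprint).sum()
score.backward()                                      # gradients w.r.t. each fingerprint bit

relevance = (fingerprint.grad * fingerprint.detach()).squeeze(0)   # gradient x input attribution
print(relevance.abs().topk(3).indices)                # fingerprint bits driving the prediction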


While drug combination therapies are a well-established concept in cancer treatment, identifying novel synergistic combinations is challenging due to the size of the combinatorial space. Computational approaches have emerged as a time- and cost-efficient way to prioritize combinations to test, based on recently available large-scale combination screening data. Deep Learning has recently achieved new state-of-the-art performance in many research areas, but had not yet been applied to drug synergy prediction; here we present such an approach, termed DeepSynergy. DeepSynergy uses chemical and genomic information as input, a normalization strategy to account for input data heterogeneity, and conical layers to model drug synergies. DeepSynergy was compared to other machine learning methods such as Gradient Boosting Machines, Random Forests, Support Vector Machines, and Elastic Nets on the largest publicly available synergy dataset with respect to mean squared error. DeepSynergy significantly outperformed the other methods, with an improvement of 7.2% over the second-best method at predicting novel drug combinations within the space of explored drugs and cell lines. At this task, the mean Pearson correlation coefficient between the measured and predicted values of DeepSynergy was 0.73. Applying DeepSynergy to the classification of these novel drug combinations resulted in a high predictive performance with an AUC of 0.90. Furthermore, we found that all compared methods exhibit low predictive performance when extrapolating to unexplored drugs or cell lines, which we suggest is due to limitations in the size and diversity of the dataset. We envision that DeepSynergy could be a valuable tool for selecting novel synergistic drug combinations. DeepSynergy is available via www.bioinf.jku.at/software/DeepSynergy. Supplementary data are available at Bioinformatics online.
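
As a rough sketch of the architecture described above (not the published code), a "conical" feed-forward network simply tapers its hidden layers down to a single regression output. The input would be the concatenated chemical descriptors of both drugs plus the cell line's genomic profile; all dimensions below are chosen arbitrarily for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative input split: two drugs' chemical descriptors + one cell line's gene expression.
drug_dim, cell_dim = 1024, 3000
input_dim = 2 * drug_dim + cell_dim

# Conical layers: each hidden layer is narrower than the previous one,
# ending in a single output (the predicted synergy score).
model = nn.Sequential(
    nn.Linear(input_dim, 8192), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(8192, 4096), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(4096, 1),
)

x = torch.randn(16, input_dim)    # a mini-batch of (drug A, drug B, cell line) triples
synergy = model(x).squeeze(-1)    # predicted synergy scores
loss = F.mse_loss(synergy, torch.randn(16))   # regression objective, as in the comparison above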


Targeted next-generation sequencing (NGS) panels have largely replaced Sanger sequencing in clinical diagnostics. They allow the detection of copy-number variations (CNVs) in addition to single-nucleotide variants and small insertions/deletions. However, existing computational CNV detection methods have shortcomings regarding accuracy, quality control (QC), incidental findings, and user-friendliness. We developed panelcn.MOPS, a novel pipeline for detecting CNVs in targeted NGS panel data. Using data from 180 samples, we compared panelcn.MOPS with five state-of-the-art methods. Most methods achieved comparably high accuracy, with panelcn.MOPS leading the field. panelcn.MOPS reliably detected CNVs ranging in size from part of a region of interest (ROI) to whole genes, which may comprise all ROIs investigated in a given sample. The latter is enabled by analyzing reads from all ROIs of the panel but presenting results exclusively for user-selected genes, thus avoiding incidental findings. Additionally, panelcn.MOPS offers QC criteria not only for samples but also for individual ROIs within a sample, which increases the confidence in called CNVs. panelcn.MOPS is freely available both as an R package and as standalone software with a graphical user interface that is easy to use for clinical geneticists without any programming experience. panelcn.MOPS combines high sensitivity and specificity with user-friendliness, rendering it highly suitable for routine clinical diagnostics.
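
For intuition only, the following NumPy sketch shows the read-depth-ratio idea that read-count-based CNV callers build on: per-ROI counts are normalized for library size and compared against control samples. panelcn.MOPS itself uses a more elaborate mixture model (building on cn.MOPS) plus per-ROI quality control, so this is not its algorithm, and the thresholds and counts are invented.

import numpy as np

def cnv_calls(sample_counts, reference_counts, dup_thr=1.4, del_thr=0.6):
    """Naive per-ROI copy-number ratios from targeted-panel read counts.
    Rows of reference_counts are control samples, columns are ROIs."""
    # Normalize each sample by its total coverage to remove library-size effects.
    sample = sample_counts / sample_counts.sum()
    reference = reference_counts / reference_counts.sum(axis=1, keepdims=True)
    ratio = sample / reference.mean(axis=0)        # ~1 for two copies, ~0.5 for a heterozygous deletion
    calls = np.where(ratio > dup_thr, "duplication",
             np.where(ratio < del_thr, "deletion", "normal"))
    return ratio, calls

roi_counts = np.array([210, 190, 95, 205])         # test sample, 4 ROIs (the 3rd looks deleted)
controls = np.array([[200, 185, 198, 210],
                     [190, 200, 205, 195],
                     [205, 195, 190, 200]])
print(cnv_calls(roi_counts, controls))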

