Mixed-Phoneme BERT: Improving BERT with Mixed Phoneme and Sup-Phoneme Representations for Text to Speech
[article]
2022
arXiv
pre-print
Experiment results demonstrate that our proposed Mixed-Phoneme BERT significantly improves the TTS performance with 0.30 CMOS gain compared with the FastSpeech 2 baseline. ...
In this paper, we propose Mixed-Phoneme BERT, a novel variant of the BERT model that uses mixed phoneme and sup-phoneme representations to enhance the learning capability. ...
Therefore, a mixed representation of phoneme and sup-phoneme sequences is proposed for pre-training of the Mixed-Phoneme BERT. ...
arXiv:2203.17190v1
fatcat:m24dsspw6vbyfaym3xr6ygmnpq
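The snippet describes mixing phoneme and sup-phoneme representations into a single pre-training input. A minimal numpy sketch of one plausible reading, where each sup-phoneme (a BPE-like unit over phonemes) embedding is broadcast over the phonemes it spans and summed with the phoneme embeddings; the vocabulary sizes, widths, and the summation itself are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vocabulary sizes and embedding width (not from the paper).
N_PHONEMES, N_SUP_PHONEMES, DIM = 80, 500, 16
phoneme_emb = rng.normal(size=(N_PHONEMES, DIM))
sup_phoneme_emb = rng.normal(size=(N_SUP_PHONEMES, DIM))

def mixed_input(phoneme_ids, sup_ids, sup_lengths):
    """Sum each phoneme embedding with the embedding of the sup-phoneme
    (phoneme-level BPE unit) that covers it, giving one aligned mixed
    input sequence for a BERT-style encoder."""
    expanded = np.repeat(sup_phoneme_emb[sup_ids], sup_lengths, axis=0)
    return phoneme_emb[phoneme_ids] + expanded

# Toy example: 5 phonemes grouped into sup-phonemes spanning 2, 1, 2 phonemes.
x = mixed_input(np.array([3, 7, 7, 12, 5]),
                np.array([42, 17, 99]),
                np.array([2, 1, 2]))
print(x.shape)  # (5, 16)
```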
Mix-Automatic Sequences
[chapter]
2013
Lecture Notes in Computer Science
In this paper we compare the class of mix-automatic sequences with the class of morphic sequences. For every polynomial ϕ we construct a mix-automatic sequence whose subword complexity exceeds ϕ. ...
Mix-automatic sequences form a proper extension of the class of automatic sequences, and arise from a generalization of finite state automata where the input alphabet is state-dependent. ...
Eventually, we introduce mix-automatic sequences as sequences that are generated by mix-DFAOs. Deterministic Finite State Automata with State-Dependent Input Alphabet. ...
doi:10.1007/978-3-642-37064-9_24
fatcat:5gtdnfck3nhrrhztne66yumuyi
Representation Mixing for TTS Synthesis
[article]
2018
arXiv
pre-print
We demonstrate a simple method for combining multiple types of linguistic information in a single encoder, named representation mixing, enabling flexible choice between character, phoneme, or mixed representations ...
Recent character and phoneme-based parametric TTS systems using deep learning have shown strong performance in natural speech generation. ...
REPRESENTATION MIXING DESCRIPTION: The input to the system consists of one data sequence, l_j, and one mask sequence, m. ...
arXiv:1811.07240v2
fatcat:o5z3i7jfpvfd7molvk5jblradm
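The snippet names one data sequence l_j and one mask sequence m; below is a minimal sketch of how such a mask could select per token between a character table and a phoneme table, enabling character, phoneme, or mixed inputs. The table sizes and the hard per-token switch are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical table sizes and width; the paper's vocabularies differ.
N_CHARS, N_PHONES, DIM = 60, 70, 16
char_emb = rng.normal(size=(N_CHARS, DIM))
phone_emb = rng.normal(size=(N_PHONES, DIM))

def mix_representations(l, m):
    """Embed one data sequence l under one mask sequence m: m[j] == 0 marks
    token j as a character id, m[j] == 1 as a phoneme id, so the same
    encoder accepts character, phoneme, or mixed inputs."""
    l, m = np.asarray(l), np.asarray(m)[:, None]
    return np.where(m == 0, char_emb[l], phone_emb[l])

e = mix_representations([5, 12, 3, 40], [0, 1, 1, 0])
print(e.shape)  # (4, 16)
```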
Identification of DNA-Binding Proteins Using Mixed Feature Representation Methods
2017
Molecules
to represent the protein sequences and improve feature representation ability. ...
The mixed-feature representation method is evaluated using 10-fold cross-validation and a test set. ...
This paper uses mixed feature representation with the best performance according to the experimental results. The method entails the following main steps. ...
doi:10.3390/molecules22101602
pmid:28937647
fatcat:msu3vorwlzd5lmwppuuzw7go44
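The snippet describes concatenating several feature blocks into a mixed representation and scoring it with 10-fold cross-validation. A hedged sketch of that evaluation loop with scikit-learn, using random stand-in features and an SVM classifier rather than the paper's actual descriptors and model:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(5)

# Random stand-ins for two feature blocks extracted from protein sequences;
# the paper's actual descriptors and classifier differ.
block_a = rng.normal(size=(200, 40))
block_b = rng.normal(size=(200, 20))
labels = rng.integers(0, 2, size=200)

# Mixed feature representation: concatenate the blocks per sequence, then
# evaluate with 10-fold cross-validation as the snippet describes.
features = np.hstack([block_a, block_b])
scores = cross_val_score(SVC(), features, labels, cv=10)
print(scores.mean())
```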
Augmenting Traditional Conceptual Models to Accommodate XML Structural Constructs
[chapter]
2007
Lecture Notes in Computer Science
Thus, there is a need to enrich traditional conceptual models with new XML Schema features. ...
We argue that our solution can be adapted generally for traditional conceptual models and show how it can be adapted for two popular conceptual models. ...
In the appendix of this chapter we provide formal representations for the added features for C-XML: sequence, choice, mixed content, and general co-occurrence constraints. -Structure independence. ...
doi:10.1007/978-3-540-75563-0_35
fatcat:ekqcffbwcjfgdmbkvlahzazkli
Deep Generative Networks For Sequence Prediction
[article]
2018
arXiv
pre-print
sequences by decoupling the static input representation from the recurrent sequence representation. ...
This thesis investigates unsupervised time series representation learning for sequence prediction problems, i.e. generating nice-looking input samples given a previous history, for high dimensional input ...
of both the input and sequence spaces. ...
arXiv:1804.06546v1
fatcat:ypodcdkbmrfuna7nco2tntsrgu
STEMM: Self-learning with Speech-text Manifold Mixup for Speech Translation
[article]
2022
arXiv
pre-print
Specifically, we mix up the representation sequences of different modalities, and take both unimodal speech sequences and multimodal mixed sequences as input to the translation model in parallel, and regularize ...
How to learn a better speech representation for end-to-end speech-to-text translation (ST) with limited labeled data? ...
When p* = 0.0, the translation task with the mixed sequence as input degrades to the MT task. ...
arXiv:2203.10426v1
fatcat:6engekf5gbgzfehqqzbdv4h4uq
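The snippet says the representation sequences of the two modalities are mixed up and that p* = 0.0 degrades the mixed input to the MT task. A minimal sketch under the assumption that p* is the probability of keeping the speech representation at each aligned position (the real method mixes at word level using alignments, which this toy version skips):

```python
import numpy as np

rng = np.random.default_rng(2)

def mixup_sequences(speech_seq, text_seq, p_star):
    """Build the multimodal mixed sequence: each aligned position keeps the
    speech representation with probability p_star, otherwise takes the text
    embedding. With p_star = 0.0 the mixed input is text only, matching the
    snippet's note that the task then degrades to MT."""
    keep_speech = rng.random(len(speech_seq)) < p_star
    return np.where(keep_speech[:, None], speech_seq, text_seq)

T, D = 8, 16                      # toy length and width (not from the paper)
speech = rng.normal(size=(T, D))  # speech representation sequence
text = rng.normal(size=(T, D))    # aligned text embedding sequence
mixed = mixup_sequences(speech, text, p_star=0.5)
print(mixed.shape)  # (8, 16); speech and mixed sequences are fed in parallel
```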
CheMixNet: Mixed DNN Architectures for Predicting Chemical Properties using Multiple Molecular Representations
[article]
2018
arXiv
pre-print
SMILES is a linear representation of chemical structures which encodes the connection table, and the stereochemistry of a molecule as a line of text with a grammar structure denoting atoms, bonds, rings ...
In this work, we present CheMixNet -- a set of neural networks for predicting chemical properties from a mixture of features learned from the two molecular representations -- SMILES as sequences and molecular ...
We demonstrate that by using a mixed deep learning approach, we can leverage the features of both sequence and fingerprint representations and achieve much better results, even with only a few hundred ...
arXiv:1811.08283v2
fatcat:zigst4l7szhf5dzophrqmcqt2m
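The snippet describes fusing features learned from SMILES sequences with molecular fingerprints. A minimal numpy sketch of that fusion pattern: a sequence branch and a fingerprint branch whose hidden features are concatenated before a prediction head. The averaged-embedding sequence branch and all sizes are placeholders; CheMixNet itself uses RNN/CNN and MLP branches:

```python
import numpy as np

rng = np.random.default_rng(3)

# Placeholder sizes; CheMixNet uses trained RNN/CNN and MLP branches instead.
N_CHARS, EMB, FP_BITS, HID = 40, 16, 128, 32
char_emb = rng.normal(size=(N_CHARS, EMB))
W_seq = rng.normal(size=(EMB, HID))
W_fp = rng.normal(size=(FP_BITS, HID))
W_out = rng.normal(size=(2 * HID, 1))

def chemix_forward(smiles_ids, fingerprint):
    """Fuse a SMILES-sequence branch with a fingerprint branch by
    concatenating their hidden features before the prediction head."""
    h_seq = np.tanh(char_emb[smiles_ids].mean(axis=0) @ W_seq)  # stand-in for an RNN/CNN
    h_fp = np.tanh(fingerprint @ W_fp)                          # stand-in for an MLP
    return np.concatenate([h_seq, h_fp]) @ W_out                # property prediction

y = chemix_forward(rng.integers(0, N_CHARS, size=30),
                   rng.integers(0, 2, size=FP_BITS).astype(float))
print(y.shape)  # (1,)
```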
The Utility of Data Transformation for Alignment, De Novo Assembly and Classification of Short Read Virus Sequences
2019
Viruses
Advances in DNA sequencing technology are facilitating genomic analyses of unprecedented scope and scale, widening the gap between our abilities to generate and fully exploit biological sequence data. ...
Our results show that the use of highly compressed sequence approximations can provide accurate results, with analytical performance retained and even enhanced through appropriate dimensionality reduction ...
WGSIM was used to generate 4 mixed virus datasets with different levels of variation. ...
doi:10.3390/v11050394
pmid:31035503
pmcid:PMC6563281
fatcat:z34h6xhofbdozf27emeqc46qhq
Ergodic theorems for affine actions of amenable groups on Hilbert space
[article]
2012
arXiv
pre-print
We use Theorem A to deduce that any affine action of G on a Hilbert space H with weakly mixing linear part admits a sequence of almost fixed points (Theorem B). ...
We prove a new weak mean ergodic theorem (Theorem A) for 1-cocycles associated to weakly mixing representations of amenable groups. ...
Let G be a finitely generated, discrete group which admits a controlled Følner sequence. Let b : G → H be a 1-cocycle associated to a weakly mixing orthogonal representation π. ...
arXiv:1207.5888v2
fatcat:bygig6vkmrbr3ncc2ljfrica2e
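For context, the standard definitions behind the snippet's terms (stated here from general knowledge, not quoted from the paper): a 1-cocycle b : G → H for an orthogonal representation π, the affine action it defines, and a sequence of almost fixed points.

```latex
\begin{align*}
  b(gh) &= b(g) + \pi(g)\,b(h) && \text{for all } g,h \in G \quad \text{(1-cocycle identity)} \\
  g \cdot \xi &= \pi(g)\,\xi + b(g), \quad \xi \in H && \text{(associated affine action)} \\
  \lVert g \cdot \xi_n - \xi_n \rVert &\to 0 && \text{for every } g \in G \quad \text{(almost fixed points)}
\end{align*}
```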
Mixed Pooling Multi-View Attention Autoencoder for Representation Learning in Healthcare
[article]
2019
arXiv
pre-print
To this end, in this paper we propose a novel unsupervised encoder-decoder model, namely Mixed Pooling Multi-View Attention Autoencoder (MPVAA), that generates patient representations encapsulating a holistic ...
Additionally, a mixed pooling strategy is incorporated in the encoding step to learn diverse information specific to each data modality. ...
By augmenting this multi-head self-attention mechanism with a mixed pooling multi-view strategy, it further helps the model to associate heterogeneous medical information with each patient to generate ...
arXiv:1910.06456v1
fatcat:3vuj64gjsngmnlouirykrqq2vq
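The snippet mentions a mixed pooling strategy applied per data modality during encoding. Assuming "mixed pooling" here follows the common convex combination of max and mean pooling (an assumption, since the snippet does not define it), a minimal sketch:

```python
import numpy as np

def mixed_pool(x, lam=0.5):
    """Convex combination of max pooling and mean pooling over the event
    axis; lam is a hypothetical mixing knob, not a value from the paper."""
    return lam * x.max(axis=0) + (1.0 - lam) * x.mean(axis=0)

rng = np.random.default_rng(4)
modality = rng.normal(size=(10, 8))  # e.g. 10 records of one data modality
print(mixed_pool(modality).shape)    # (8,) pooled summary for that modality
```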
Applying of novel subtraction method Genetically Directed Differential Subtraction Chain (GDDSC) in plant genomes
2011
Nature Precedings
The newly identified tags, obtained by GDDSC represent pools of candidate genes and other sequences, which could serve as potential markers for requested traits. ...
Mix the tester and driver representations in a 1:1 ratio and overlay each representation with two drops of mineral oil, even if you use a thermocycler with a heated lid. ...
As the number of rounds in GDDSC increases, the subtracted mix becomes more saturated in polymorphic sequences. ...
doi:10.1038/npre.2011.5465.2
fatcat:zzbtqxn3xnggvham52dhwuk33q
A new generator of chaotic bit sequences with mixed-mode inputs
[article]
2017
arXiv
pre-print
This paper presents a new generator of chaotic bit sequences with mixed-mode (continuous and discrete) inputs. ...
The obtained sequences of chaotic bits show some features of random processes with increased entropy levels, even in the cases of small numbers of bit representations. ...
In order to secure a much wider diversity in creating chaotic sequences with excellent randomness features, we examined how the sequences obtained from our mixed-mode generator ...
arXiv:1712.09321v1
fatcat:wmotnhaprvdzjfvawl3pzuincq
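The snippet describes generating chaotic bit sequences; as a minimal point of reference (not the paper's mixed-mode design, which also mixes continuous and discrete inputs), a plain logistic-map bit generator looks like this:

```python
import numpy as np

def chaotic_bits(n, x0=0.3141, r=3.99):
    """Toy chaotic bit source: iterate the logistic map and threshold at 0.5.
    The paper's generator additionally mixes continuous and discrete
    (mixed-mode) inputs, which this sketch omits."""
    bits, x = np.empty(n, dtype=np.uint8), x0
    for i in range(n):
        x = r * x * (1.0 - x)
        bits[i] = 1 if x >= 0.5 else 0
    return bits

seq = chaotic_bits(64)
print(seq.mean())  # rough balance check of ones vs zeros
```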
Spectral theory of dynamical systems
[article]
2020
arXiv
pre-print
In [200], the analytic case is considered, leading to a "generic" result on disjointness with the ELF class that generalizes Shklover's classical result on the weak mixing property [273]. ...
associated with Cartesian products T^{×k} for a generic transformation T. ...
arXiv:2006.11616v1
fatcat:mvuhnfdwdvgv7dleo4hzy6wphi