A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2019; you can also visit the original URL.
Lecture Notes in Computer Science
Illumination variation poses a serious problem in video shot detection: it causes false cuts in many shot detection algorithms. This paper proposes a new illumination-invariant metric, based on the assumption that the outputs of derivative filters applied to the log-illumination are sparse; the outputs of derivative filters applied to the log-image are therefore caused mainly by the scene itself. If the total output exceeds a threshold, a scene change (shot boundary) is declared. Although this metric can detect gradual transitions as well as cuts, it is applied as a post-processing step on cut candidates, because an illumination change is usually declared as a false cut.
doi:10.1007/978-3-540-45080-1_158
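The metric described in this abstract can be sketched in a few lines: take log-images, apply first-derivative filters, and compare the filter responses between consecutive frames. The threshold and the use of simple forward differences are assumptions for illustration, not the paper's exact filters.

```python
import numpy as np

def log_gradient_energy(frame_a, frame_b, eps=1e-6):
    """Total change in derivative-filter outputs of the log-images of two
    consecutive frames. Under the sparse log-illumination-gradient
    assumption, this energy reflects scene content, not lighting."""
    la, lb = np.log(frame_a + eps), np.log(frame_b + eps)
    # horizontal and vertical first-order derivative filters (forward diff)
    dxa, dya = np.diff(la, axis=1), np.diff(la, axis=0)
    dxb, dyb = np.diff(lb, axis=1), np.diff(lb, axis=0)
    return np.abs(dxa - dxb).sum() + np.abs(dya - dyb).sum()

def is_cut(frame_a, frame_b, threshold):
    # Declare a shot boundary when the log-gradient change is large;
    # a pure (multiplicative) illumination change shifts the log-image by a
    # near-constant and leaves the gradients almost unchanged.
    return log_gradient_energy(frame_a, frame_b) > threshold
```

A global brightness change (frame scaled by a constant) adds a constant to the log-image, so the gradient difference stays near zero while a true scene change does not.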
This paper investigates the uplink of multi-user massive multiple-input multiple-output (MIMO) systems with a mixed analog-to-digital converter (ADC) receiver architecture, in which some antennas are equipped with costly full-resolution ADCs and the others with less expensive low-resolution ADCs. A closed-form approximation of the achievable spectral efficiency (SE) with the maximum-ratio combining (MRC) detector is derived. Based on this approximation, the effects of the number of base station (BS) antennas, the transmit power, the proportion of full-resolution ADCs in the mixed-ADC structure, and the number of quantization bits of the low-resolution ADCs are revealed. The results show that the achievable SE increases with the number of BS antennas and quantization bits, and that it converges to a saturated value in the high user power regime or in the full ADC resolution case. Most importantly, this work verifies that for massive MIMO, a mixed-ADC receiver with a small fraction of full-resolution ADCs can achieve SE comparable to a receiver with all full-resolution ADCs, at a considerably lower hardware cost.
arXiv:1608.00352v1
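The qualitative trends above can be reproduced with a toy Monte Carlo simulation under the additive quantization noise model (AQNM), a standard model in the mixed-ADC literature. This is an illustrative single-user sketch, not the paper's multi-user closed-form approximation; the distortion factors below are the standard Lloyd-Max values.

```python
import numpy as np

# AQNM distortion factors rho_b of an optimal (Lloyd-Max) b-bit quantizer;
# the effective gain of a b-bit ADC branch is alpha_b = 1 - rho_b.
RHO = {1: 0.3634, 2: 0.1175, 3: 0.03454, 4: 0.009497, 5: 0.002499}

def mrc_se_aqnm(n_ant, frac_full, bits, snr_db, trials=500, seed=0):
    """Monte Carlo spectral-efficiency estimate for a single user under MRC
    with a mixed-ADC front end (AQNM). Illustrative only -- a simplification
    of the paper's multi-user setting."""
    rng = np.random.default_rng(seed)
    p = 10.0 ** (snr_db / 10.0)
    n_full = int(round(frac_full * n_ant))
    alpha = np.ones(n_ant)
    alpha[n_full:] = 1.0 - RHO[bits]       # low-resolution antennas
    se = 0.0
    for _ in range(trials):
        # |h_i|^2 for Rayleigh fading is Exp(1)-distributed.
        g = 0.5 * (rng.standard_normal(n_ant) ** 2 + rng.standard_normal(n_ant) ** 2)
        signal = p * (alpha @ g) ** 2                          # coherent MRC gain
        thermal = (alpha ** 2 * g).sum()                       # combined AWGN
        quant = (alpha * (1 - alpha) * (p * g + 1) * g).sum()  # AQNM distortion
        se += np.log2(1.0 + signal / (thermal + quant))
    return se / trials
```

Sweeping the parameters shows the behavior claimed in the abstract: SE grows with the antenna count, and a small fraction of full-resolution ADCs recovers most of the all-full-resolution SE.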
Lecture Notes in Computer Science
The hardness of the learning with errors (LWE) problem is one of the most fruitful resources of modern cryptography. In particular, it is one of the most prominent candidates for secure post-quantum cryptography. Understanding its quantum complexity is therefore an important goal. We show that, under quantum polynomial-time reductions, LWE is equivalent to a relaxed version of the dihedral coset problem (DCP), which we call extrapolated DCP (eDCP). The extent of extrapolation varies with the LWE noise rate. By considering different extents of extrapolation, our result generalizes Regev's famous proof that if DCP is in BQP (quantum poly-time) then so is LWE (FOCS'02). We also discuss a connection between eDCP and Childs and Van Dam's algorithm for generalized hidden shift problems (SODA'07). Our result implies that a BQP solution for LWE might not require the full power of solving DCP, but rather only a solution for its relaxed version, eDCP, which could be easier.
doi:10.1007/978-3-319-76581-5_24
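For concreteness, the eDCP input states can be written as follows (notation assumed here, in the flavor of standard DCP definitions; f is the extrapolation function, and plain DCP is recovered when f is uniform on {0, 1}):

```latex
% DCP: given many registers of the form
%   (1/\sqrt{2}) ( |0>|x> + |1>|x + s mod N> ),  find the hidden shift s.
% eDCP with extrapolation function f: given many registers of the form
\[
  \frac{1}{\sqrt{\textstyle\sum_{j} |f(j)|^{2}}}\;
  \sum_{j} f(j)\,\lvert j\rangle\,\lvert x + j\cdot s \bmod N\rangle ,
\]
% find s. The support and shape of f (the "extent of extrapolation")
% scale with the LWE noise rate.
```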
This study aims to present our experience of the clinical course and management of deep neck infection and to determine whether the characteristics of this kind of infection are similar between children and adults in southern China. Patients diagnosed with deep neck infection in the Division of Otolaryngology of the First Affiliated Hospital of Sun Yat-sen University between January 2002 and December 2011 were screened retrospectively for demographic characteristics, presenting symptoms, antibiotic therapy before admission, history of antibiotics abuse, leucocyte count, etiology, bacteriology, disease comorbidity, imaging, treatment, complications, and outcomes. One hundred thirty patients were included: 44 (33.8%) were younger than 18 years (the children group) and 86 (66.2%) were older than 18 years (the adults group). Fever, trismus, neck pain, and odynophagia were the most common symptoms in both groups. Forty children (90.9%) and 49 adults (57.0%) had been treated with broad-spectrum antibiotic therapy before admission. Thirty-one children (70.5%) and 24 adults (27.9%) had a history of antibiotics abuse. In the children group, the site most commonly involved was the parapharyngeal space (18 patients, 40.9%); in the adults group, multiple spaces were most commonly involved (30 patients, 34.9%). In the children group, the most common cause was branchial cleft cyst (5 patients, 11.4%), and the cause remained unknown in 31 patients (70.5%); in the adults group, the most common cause was pharyngeal infection (19 patients, 22.2%). All 27 patients with associated disease comorbidity were adults, and 17 had diabetes mellitus (DM). Streptococcus viridans was the most common pathogen in both groups. Eighty-six patients (66.2%) underwent surgical drainage, and complications were found in 31 patients (4 children, 27 adults). Deep neck infection in adults is more likely to involve multiple spaces and lead to complications, and appears to be more serious than in children. Understanding the different characteristics of children and adults with deep neck infection may be helpful for accurate evaluation and proper management. (Medicine 94(27):e994) Abbreviation: DM = diabetes mellitus. Editor: Liang Jin.
doi:10.1097/md.0000000000000994 pmid:26166132 pmcid:PMC4504584
The Module Learning With Errors problem (M-LWE) has gained popularity in recent years for its security-efficiency balance, and its hardness has been established for a number of variants. In this paper, we focus on proving the hardness of (search) M-LWE for general secret distributions, provided they carry sufficient min-entropy. This is called entropic hardness of M-LWE. First, we adapt the line of proof of Brakerski and Döttling on R-LWE (TCC'20) to prove that the existence of certain distributions implies the entropic hardness of M-LWE. Then, we provide one such distribution, whose required properties rely on the hardness of the decisional Module-NTRU problem.
dblp:journals/iacr/BoudgoustJRW22
The hardness of the learning with errors (LWE) problem is one of the most fruitful resources of modern cryptography. In particular, it is one of the most prominent candidates for secure post-quantum cryptography. Understanding its quantum complexity is therefore an important goal. We show that, under quantum polynomial-time reductions, LWE is equivalent to a relaxed version of the dihedral coset problem (DCP), which we call extrapolated DCP (eDCP). The extent of extrapolation varies with the LWE noise rate. By considering different extents of extrapolation, our result generalizes Regev's famous proof that if DCP is in BQP (quantum poly-time) then so is LWE (FOCS'02). We also discuss a connection between eDCP and Childs and Van Dam's algorithm for generalized hidden shift problems (SODA'07). Our result implies that a BQP solution for LWE might not require the full power of solving DCP, but rather only a solution for its relaxed version, eDCP, which could be easier.
arXiv:1710.08223v2
Object detection and counting are related but challenging problems, especially for drone-based scenes with small objects and cluttered backgrounds. In this paper, we propose a new Guided Attention Network (GANet) to deal with both the detection and counting tasks based on the feature pyramid. Unlike previous methods relying on unsupervised attention modules, we fuse different scales of feature maps using the proposed weakly-supervised Background Attention (BA) between the background and objects for a more semantic feature representation. Then, the Foreground Attention (FA) module is developed to consider both the global and local appearance of the object to facilitate accurate localization. Moreover, a new data augmentation strategy is designed to train a robust model in various complex scenes. Extensive experiments on three challenging benchmarks (i.e., UAVDT, CARPK and PUCPR+) show the state-of-the-art detection and counting performance of the proposed method compared with existing methods.
arXiv:1909.11307v1
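Structurally, background attention amounts to down-weighting background activations in a fine-scale feature map before fusing it with a coarser scale. The sketch below shows only that data flow with fixed arrays; the learned attention, layer shapes, and fusion rule are assumptions, not the GANet architecture.

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def background_attention_fuse(coarse, fine, bg_prob):
    """Suppress background activations in the fine map using a (weakly
    supervised) background probability map, then fuse with the upsampled
    coarse map. A structural sketch only."""
    attended = fine * (1.0 - bg_prob)      # down-weight background pixels
    return upsample2x(coarse) + attended
```

With `bg_prob` close to 1 everywhere, the fine map is suppressed and the fused result reduces to the upsampled coarse features; with `bg_prob` near 0, the fine features pass through unchanged.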
In a so-called partial key exposure attack, one obtains some information about the secret key, e.g., via side-channel leakage. This information might be a certain fraction of the secret key bits (erasure model) or some erroneous version of the secret key (error model). The goal is to recover the secret key from the leaked information. There is a common belief that, as opposed to e.g. the RSA cryptosystem, most post-quantum cryptosystems are usually resistant against partial key exposure attacks. We strongly question this belief by constructing partial key exposure attacks on code-based, multivariate, and lattice-based schemes (BIKE, Rainbow and NTRU). Our attacks exploit the redundancy that modern PQ cryptosystems inherently use for efficiency reasons. The application and development of techniques from information set decoding plays a crucial role in achieving our results. On the theoretical side, we show non-trivial information leakage bounds that allow for a polynomial-time key recovery attack. As an example, for all schemes the knowledge of a constant fraction of the secret key bits suffices to reconstruct the full key in polynomial time. Even if we no longer insist on polynomial-time attacks, most of our attacks extend well and remain feasible up to large erasure and error rates. In the case of BIKE, for example, we obtain attack complexities around 60 bits when half of the secret key bits are erased, or a quarter of the secret key bits are faulty. Our results show that even highly error-prone key leakage of modern PQ cryptosystems may lead to full secret key recoveries.
dblp:journals/iacr/EsserMVW22
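The erasure-model intuition — redundancy plus a fraction of known key bits turns key recovery into linear algebra — can be illustrated with a toy GF(2) system. The "public relations" matrix below is a hypothetical stand-in for the structural equations a real scheme exposes; this is not any of the paper's actual attacks on BIKE, Rainbow, or NTRU.

```python
import numpy as np

def solve_gf2(A, b):
    """Gaussian elimination over GF(2). Returns one solution x of A x = b
    (free variables set to 0), or None if the system is inconsistent."""
    A = (A % 2).astype(np.uint8)
    b = (b % 2).astype(np.uint8)
    n_rows, n_cols = A.shape
    pivots, row = [], 0
    for col in range(n_cols):
        hit = np.flatnonzero(A[row:, col])
        if hit.size == 0:
            continue
        piv = row + hit[0]
        A[[row, piv]] = A[[piv, row]]      # swap pivot row into place
        b[[row, piv]] = b[[piv, row]]
        for r in range(n_rows):            # clear the column elsewhere
            if r != row and A[r, col]:
                A[r] ^= A[row]
                b[r] ^= b[row]
        pivots.append(col)
        row += 1
        if row == n_rows:
            break
    if row < n_rows and b[row:].any():
        return None                        # zero row with nonzero rhs
    x = np.zeros(n_cols, dtype=np.uint8)
    for i, col in enumerate(pivots):
        x[col] = b[i]
    return x

# Toy erasure-model recovery: a 32-bit secret satisfies public linear
# redundancy A_pub x = b_pub, and a side channel leaks half of its bits.
rng = np.random.default_rng(7)
n = 32
x_true = rng.integers(0, 2, n, dtype=np.uint8)
A_pub = rng.integers(0, 2, (16, n), dtype=np.uint8)
b_pub = (A_pub @ x_true) % 2
leaked = rng.choice(n, n // 2, replace=False)
E = np.zeros((leaked.size, n), dtype=np.uint8)
E[np.arange(leaked.size), leaked] = 1      # each leaked bit is an equation
x_hat = solve_gf2(np.vstack([A_pub, E]), np.concatenate([b_pub, x_true[leaked]]))
```

The more redundancy the scheme exposes, the smaller the leaked fraction needed before the combined system pins down the key.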
Plate-based single-cell RNA-Seq (scRNA-seq) methods can detect a comprehensive gene expression profile but suffer from the high library cost of each single cell. Although the cost can be reduced significantly by massively parallel scRNA-seq techniques, these approaches lose sensitivity for gene detection. Inspired by group testing and compressed sensing, we designed a computational framework to close the gap between sensitivity and library cost. In our framework, single cells are assigned, in an overlapped design, into a number of pools. The expression profile of each pool is then obtained using a plate-based sequencing approach, and the expression profiles of all single cells are recovered from the pool expression and the overlapped pooling design. The inferred expression profiles showed high consistency with the original data in both accuracy and cell type identification. A parallel computing scheme was designed to boost speed when processing enormous numbers of single cells, and elastic net regression was combined with compressed sensing to auto-adapt for both sparsely and densely expressed genes.
doi:10.1101/338319
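The pool-and-recover idea can be sketched with a random overlapped pooling matrix and a sparse solver. The sizes, the Bernoulli pooling design, and the plain lasso/ISTA solver below are illustrative assumptions — the paper uses elastic net regression, and real pool designs are engineered rather than random.

```python
import numpy as np

def ista_lasso(A, y, lam=0.01, iters=3000):
    """Iterative soft-thresholding (ISTA) for
    min_x 0.5*||A x - y||^2 + lam*||x||_1,
    a simple stand-in for the elastic-net solver used in the framework."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L      # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(1)
n_cells, n_pools = 60, 25
# Overlapped pooling design: each pool sequences a random ~30% of the cells.
pools = (rng.random((n_pools, n_cells)) < 0.3).astype(float)
# Toy expression of one gene: sparse across cells (most cells silent).
expr = np.zeros(n_cells)
expr[rng.choice(n_cells, 5, replace=False)] = rng.uniform(1.0, 3.0, 5)
pooled = pools @ expr                      # observed pool-level expression
recovered = ista_lasso(pools, pooled)      # per-cell profile from 25 libraries
```

Here 60 cells are profiled with only 25 pooled libraries; sparsity of per-gene expression is what makes the underdetermined recovery possible.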
Lecture Notes in Computer Science
In this paper, we propose an intelligent recognition system for objectionable images in the JPEG compression domain. First, the system applies a robust skin color model and skin texture analysis to detect the skin regions in an input image based on its DC and AC coefficients. Then, color, texture, shape, and statistical features are extracted from the skin regions and fed into a decision-tree classifier for classification. A large image library of about 120,000 images is employed to evaluate the system's effectiveness.
doi:10.1007/978-3-540-45080-1_164
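The skin-region detection step can be sketched as a chrominance box test. The YCbCr thresholds below are a common heuristic from the skin-detection literature, not the paper's robust skin color model, and applying them to per-block DC chrominance values is an assumption for illustration.

```python
import numpy as np

def skin_mask_ycbcr(cb, cr):
    """Heuristic skin-color test in YCbCr space using the common box
    77 <= Cb <= 127, 133 <= Cr <= 173. In the JPEG domain this could be
    evaluated on the DC (block-average) chrominance of each 8x8 block."""
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
```

The resulting boolean mask marks candidate skin blocks, from which the color/texture/shape features would then be extracted.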
The conventional standard for object detection uses a bounding box to represent each individual object instance. However, it is not practical in industry-relevant applications in the context of warehouses, due to severe occlusions among groups of instances of the same category. In this paper, we propose a new task, i.e., simultaneous object localization and counting, abbreviated as Locount, which requires algorithms to localize groups of objects of interest together with the number of instances. However, there does not exist a dataset or benchmark designed for such a task. To this end, we collect a large-scale object localization and counting dataset with rich annotations in retail stores, which consists of 50,394 images with more than 1.9 million object instances in 140 categories. Together with this dataset, we provide a new evaluation protocol and divide the training and testing subsets to fairly evaluate the performance of algorithms for Locount, developing a new benchmark for the Locount task. Moreover, we present a cascaded localization and counting network as a strong baseline, which gradually classifies and regresses the bounding boxes of objects with the predicted numbers of instances enclosed in the bounding boxes, trained in an end-to-end manner. Extensive experiments are conducted on the proposed dataset to demonstrate its significance, and analysis and discussion of failure cases are provided to indicate future directions. The dataset is available at https://isrc.iscas.ac.cn/gitlab/research/locount-dataset.
arXiv:2003.08230v3
Lecture Notes in Computer Science
In this paper, we present an approach that exploits audio and video features to automatically segment news items. Integrating audio and visual analysis can overcome the weaknesses of approaches that use image analysis techniques alone, making our approach more adaptable to the variable presentation of news items. The proposed approach identifies silence segments in the accompanying audio and integrates them with shot segmentation and anchor shot detection results to determine the boundaries between news items. Experiments show that the integration of audio and video features effectively solves the problem of automatic news item segmentation.
doi:10.1007/3-540-45453-5_64
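The silence-detection step can be sketched with short-time energy thresholding. The frame length and the relative-energy threshold are assumptions for illustration; the paper does not specify its exact thresholding scheme here.

```python
import numpy as np

def silence_segments(audio, sr, frame_ms=20, energy_ratio=0.05):
    """Return a boolean array marking frames whose short-time energy falls
    below a fraction of the clip's mean energy. A minimal sketch of the
    silence-detection step (threshold scheme assumed)."""
    n = int(sr * frame_ms / 1000)          # samples per frame
    n_frames = len(audio) // n
    frames = audio[: n_frames * n].reshape(n_frames, n)
    energy = (frames ** 2).mean(axis=1)    # short-time energy per frame
    return energy < energy_ratio * energy.mean()
```

Runs of silent frames would then be aligned with shot boundaries and anchor shots to place news-item boundaries.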
In this paper, we present a hybrid text segmentation approach for text embedded in images, aiming to combine the advantages of difference-based and similarity-based methods. First, a new stroke edge filter is applied to obtain a stroke edge map. Then a two-threshold method based on an improved Niblack thresholding technique is used to identify stroke edges. The pixels between edge pairs above the high threshold are collected to estimate the representative stroke color, so that stroke pixels can be further extracted by computing color similarity. Finally, heuristic rules are devised to integrate stroke edge and stroke region information to obtain better segmentation results. Experimental results show that our approach can effectively segment text from background.
doi:10.1109/icme.2009.5202545 dblp:conf/icmcs/LiWHGQ09
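For reference, the classic Niblack rule that the two-threshold method builds on computes a local threshold T = mean + k·std over a sliding window. The sketch below implements only that baseline; the paper's improved variant and its two-threshold scheme are not reproduced, and the window size and k are conventional defaults.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def niblack_threshold(img, win=15, k=-0.2):
    """Per-pixel Niblack local threshold T = mean + k * std over a
    win x win window (edge-padded). Classic baseline, not the paper's
    improved variant."""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    windows = sliding_window_view(padded, (win, win))
    mean = windows.mean(axis=(2, 3))
    std = windows.std(axis=(2, 3))
    return mean + k * std

def binarize_niblack(img, win=15, k=-0.2):
    # Pixels darker than the local threshold are marked as candidate strokes.
    return img < niblack_threshold(img, win, k)
```

A two-threshold variant would run this with a conservative (high) and a permissive (low) threshold and keep low-threshold pixels only when connected to high-threshold evidence.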
The same crystal structure, identical particle surface morphology, and similar particle size distribution of the MSn5 (M = Fe, Co and FeCo) phases make them ideal for comparing electrochemical performance, reaction mechanism, thermodynamics, and kinetics.
doi:10.1039/c4ta06960a
The Module Learning With Errors problem (M-LWE) is a core computational assumption of lattice-based cryptography which offers an interesting trade-off between guaranteed security and concrete efficiency. The problem is parameterized by a secret distribution as well as an error distribution. There is a gap between the choices of those distributions for theoretical hardness results (the standard formulation of M-LWE, i.e., uniform secret modulo q and Gaussian error) and practical schemes (small bounded secret and error). In this work, we make progress towards narrowing this gap. More precisely, we prove that M-LWE with η-bounded secret for any 2 ≤ η ≪ q and Gaussian error, in both its search and decision variants, is at least as hard as the standard formulation of M-LWE, provided that the module rank d is at least logarithmic in the ring degree n. We also prove that the search version of M-LWE with large uniform secret and uniform η-bounded error is at least as hard as the standard M-LWE problem, if the number of samples m is close to the module rank d and with further restrictions on η. The latter result can be extended to provide the hardness of M-LWE with uniform η-bounded secret and error under specific parameter conditions.
dblp:journals/iacr/BoudgoustJRW22a
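As a reminder of the object being parameterized (standard M-LWE notation, assumed here, with a power-of-two cyclotomic ring for concreteness): with R_q = Z_q[X]/(X^n + 1) and module rank d, an M-LWE sample is

```latex
\[
  \bigl(a,\; b = \langle a, s\rangle + e\bigr) \in R_q^{d} \times R_q,
  \qquad a \leftarrow U(R_q^{d}),\;\; s \leftarrow \chi_s^{\,d},\;\; e \leftarrow \chi_e .
\]
% Standard formulation: \chi_s uniform mod q, \chi_e Gaussian.
% This work: \chi_s (resp. \chi_e) is \eta-bounded, i.e., outputs ring
% elements whose coefficients have magnitude at most \eta.
```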