Figure 1: Given a 3D architectural model with user-specified backdrop and ground (left), our algorithm automatically creates a paper architecture approximating the model (mid-right, with the planar layout in mid-left), which can be physically engineered and popped up (right).

Abstract: Paper architectures are 3D paper buildings created by folding and cutting. The creation process of paper architecture is often labor-intensive and highly skill-demanding, even with the aid of existing computer-aided design tools. We propose an automatic algorithm for generating paper architectures given a user-specified 3D model. The algorithm is grounded on a geometric formulation of planar layouts for paper architectures that can be popped up in a rigid and stable manner, and on sufficient conditions for a 3D surface to be popped up from such a planar layout. Based on these conditions, our algorithm computes a class of paper architectures containing two sets of parallel patches that approximate the input geometry while being guaranteed to be physically realizable. The method is demonstrated on a number of architectural examples, and physically engineered results are presented. doi:10.1145/1778765.1778848
In: ACM SIGGRAPH 2010 Papers (SIGGRAPH '10).
An extra splitting plane in S_i is then added at the position of z_j^t (Figure 5, bottom) if (z_i − min(z_i, z_j^t)) / (z_i − z_{i−1}) is above a certain threshold τ_r (typically 25% in our experiments). ... doi:10.1145/2070781.2024218
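The ratio test in this snippet can be sketched in a few lines. This is a minimal illustration only, assuming the splitting positions are kept in a sorted list `z`; the function and parameter names (`needs_extra_split`, `tau_r`) are illustrative and not taken from the paper's code:

```python
def needs_extra_split(z, i, z_t, tau_r=0.25):
    """Decide whether to insert an extra splitting plane at position z_t
    inside the slab [z[i-1], z[i]].

    Implements the ratio test (z_i - min(z_i, z_t)) / (z_i - z_{i-1}) > tau_r
    from the text; all names here are illustrative assumptions.
    """
    gap = z[i] - min(z[i], z_t)   # distance from the slab's far boundary
    slab = z[i] - z[i - 1]        # full thickness of the slab
    return gap / slab > tau_r
```

For example, with z = [0.0, 1.0, 2.0], a candidate at z_t = 1.2 inside the slab [1.0, 2.0] yields a ratio of 0.8 > 0.25, so a plane would be added; a candidate at z_t = 1.9 yields 0.1 and would be skipped.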
X-linked inhibitor of apoptosis (XIAP)-associated factor 1 (XAF1), a XIAP-binding protein, is a tumor suppressor gene. XAF1 is silenced or expressed at low levels in most human malignant tumors. However, the role of XAF1 in hepatocellular carcinoma (HCC) remains unknown. In this study, we investigated the effect of XAF1 on tumor growth and angiogenesis in hepatocellular cancer cells. Our results showed that XAF1 expression was lower in the HCC cell lines SMMC-7721, Hep G2 and BEL-7404 and in liver cancer tissues than in paired non-cancer liver tissues. Adenovirus-mediated XAF1 expression (Ad5/F35-XAF1) significantly inhibited cell proliferation and induced apoptosis in HCC cells in dose- and time-dependent manners. Infection with Ad5/F35-XAF1 induced cleavage of caspase-3, -8, -9 and PARP in HCC cells. Furthermore, Ad5/F35-XAF1 treatment significantly suppressed tumor growth in a xenograft model of liver cancer cells. Western blot and immunohistochemistry staining showed that Ad5/F35-XAF1 treatment suppressed expression of vascular endothelial growth factor (VEGF), which is associated with tumor angiogenesis, in cancer cells and xenograft tumor tissues. Moreover, Ad5/F35-XAF1 treatment prolonged the survival of tumor-bearing mice. Our results demonstrate that XAF1 inhibits tumor growth by inducing apoptosis and inhibiting tumor angiogenesis. XAF1 may be a promising target for liver cancer treatment. doi:10.18632/oncotarget.2114 pmid:24980821 pmcid:PMC4170645
We introduce Chinese Text in the Wild, a very large dataset of Chinese text in street-view images. While optical character recognition (OCR) in document images is well studied and many commercial tools are available, detection and recognition of text in natural images remains a challenging problem, especially for more complicated character sets such as Chinese text. Lack of training data has always been a problem, especially for deep learning methods, which require massive training data. In this paper we provide details of a newly created dataset of Chinese text with about 1 million Chinese characters annotated by experts in over 30 thousand street-view images. This is a challenging dataset with good diversity. It contains planar text, raised text, text in cities, text in rural areas, text under poor illumination, distant text, partially occluded text, etc. For each character in the dataset, the annotation includes its underlying character, its bounding box, and 6 attributes. The attributes indicate whether it has a complex background, whether it is raised, whether it is handwritten or printed, etc. The large size and diversity of this dataset make it suitable for training robust neural networks for various tasks, particularly detection and recognition. We give baseline results using several state-of-the-art networks, including AlexNet, OverFeat, Google Inception and ResNet for character recognition, and YOLOv2 for character detection in images. Overall, Google Inception has the best performance on recognition with 80.5% top-1 accuracy, while YOLOv2 achieves an mAP of 71.0% on detection. The dataset, source code and trained models will all be publicly available on the website. arXiv:1803.00085v1
Figure 1: Without any user intervention, our framework automatically turns a freehand sketch drawing depicting multiple scene objects (left) into semantically valid, well-arranged scenes of 3D models (right). (The ground and walls were manually added.)

Abstract: This work presents Sketch2Scene, a framework that automatically turns a freehand sketch drawing depicting multiple scene objects into semantically valid, well-arranged scenes of 3D models. Unlike existing works on sketch-based search and composition of 3D models, which typically process individual sketched objects one by one, our technique performs co-retrieval and co-placement of relevant 3D models by jointly processing the sketched objects. This is enabled by summarizing functional and spatial relationships among models in a large collection of 3D scenes as structural groups. Our technique greatly reduces the amount of user intervention needed for sketch-based modeling of 3D scenes and fits well into the traditional production pipeline involving concept design followed by 3D modeling. A pilot study indicates that our technique is a promising and more efficient alternative to standard 3D modeling for 3D scene construction. doi:10.1145/2461912.2461968
Computational Visual Media Vol. 6, No. 2, 113–133, 2020. Or Patashnik, Min Lu, Amit H. Bermano, and Daniel Cohen-Or. Temporal scatterplots. ... doi:10.1007/s41095-021-0224-x
The image combination optimization is also performed in parallel, and takes about 1 min. ... AS: average score; TT: total time; IT: interaction time. ... (filtering) each scene item, and about 3–4 mins to process the background. ... doi:10.1145/1618452.1618470
The Visual Computer
Shi-Min Hu received the PhD degree in 1996 from Zhejiang University. He is currently a professor of computer science at Tsinghua University. ... doi:10.1007/s00371-006-0047-x
This paper presents a new approach for reconstructing solids with planar, quadric and toroidal surfaces from three-view engineering drawings. By applying geometric theory to 3-D reconstruction, our method is able to remove restrictions placed on the axes of curved surfaces by existing methods. The main feature of our algorithm is that it combines the geometric properties of conics with affine properties to recover a wider range of 3-D edges. First, the algorithm determines the type of each 3-D candidate conic edge based on its projections in three orthographic views, and then generates that candidate edge using the conjugate diameter method. This step produces a wire-frame model that contains all candidate vertices and candidate edges. Next, a maximum turning angle method is developed to find all the candidate faces in the wire-frame model. Finally, a general and efficient searching technique is proposed for finding valid solids from the candidate faces; the technique greatly reduces the search space and the number of backtracking incidents. Several examples are given to demonstrate the efficiency and capability of the proposed algorithm. doi:10.1016/s0010-4485(00)00143-3
I would like to take this opportunity to thank everyone who has helped to make Computational Visual Media a success in its second year, 2016. In particular, my thanks go to the authors, the reviewers, and the Editorial Board members, as well as the staff of Tsinghua University Press and Springer. Your combined efforts have helped to ensure that all four issues for 2016 were published on schedule, before the end of the year. 31 papers were published in 4 issues in 2016, including regular papers and papers recommended to us by the CVM conference and Pacific Graphics. The acceptance rate for regular papers was 37.5%. Following last year's success, Tsinghua University Press has sponsored an annual award for the best paper published in Computational Visual Media. After careful selection by the Editorial Board amongst the 31 papers published in 2016, the paper "An interactive approach for functional prototype recovery from a single RGBD image" has won the best paper award, and three other papers, "User controllable anisotropic shape distribution on 3D meshes", "Comfort-driven disparity adjustment for stereoscopic video", and "VoxLink - combining sparse volumetric data and geometry for efficient rendering", have won honorable mention awards. doi:10.1007/s41095-017-0079-3
Automatic segmentation of images, a much-researched topic in image processing and computer vision [Shi and Malik 2000; Boykov and Funka-Lea 2006; Paris and Durand 2007], is often ill-posed and unlikely to ... doi:10.1145/1833351.1778820
Shi-Min Hu: Computational Visual Media is an ideal vehicle in which to publish and disseminate relevant research findings, and in which to exchange novel research ideas and significant practical results ... doi:10.1007/s41095-015-0001-9