5,865 Hits in 2.9 sec

Coarse-to-Fine Sparse Sequential Recommendation [article]

Jiacheng Li, Tong Zhao, Jin Li, Jim Chan, Christos Faloutsos, George Karypis, Soo-Min Pantel, Julian McAuley
2022 arXiv   pre-print
To this end, we present a coarse-to-fine self-attention framework, namely CaFe, which explicitly learns coarse-grained and fine-grained sequential dynamics.  ...  Sequential recommendation aims to model dynamic user behavior from historical interactions. Self-attentive methods have proven effective at capturing short-term dynamics and long-term preferences.  ...  METHOD In this section, we propose CaFe to advance sequential recommendation performance on sparse datasets.  ... 
arXiv:2204.01839v1 fatcat:iosmbptdqrdmfijgah2v262vay
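
The abstract names the coarse-to-fine mechanism only in outline; below is a minimal PyTorch sketch of one way a coarse (intent-level) self-attention pass can condition a fine (item-level) pass. The module layout, dimensions, and additive fusion are illustrative assumptions, not the authors' CaFe implementation.

```python
# Hypothetical coarse-to-fine self-attention sketch: a coarse encoder over
# an intent sequence conditions a fine encoder over the item sequence.
import torch
import torch.nn as nn

class CoarseToFineEncoder(nn.Module):
    def __init__(self, n_items, n_intents, d=64, n_heads=2):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, d)
        self.intent_emb = nn.Embedding(n_intents, d)
        self.coarse_attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.fine_attn = nn.MultiheadAttention(d, n_heads, batch_first=True)

    def forward(self, item_seq, intent_seq):
        # Coarse pass: self-attention over the (denser) intent sequence.
        c = self.intent_emb(intent_seq)
        c, _ = self.coarse_attn(c, c, c)
        # Fine pass: item representations conditioned on coarse context.
        f = self.item_emb(item_seq) + c
        f, _ = self.fine_attn(f, f, f)
        return f  # per-position user state for next-item scoring

enc = CoarseToFineEncoder(n_items=1000, n_intents=50)
items = torch.randint(0, 1000, (4, 20))
intents = torch.randint(0, 50, (4, 20))
print(enc(items, intents).shape)  # torch.Size([4, 20, 64])
```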

CAFE: Coarse-to-Fine Neural Symbolic Reasoning for Explainable Recommendation

Yikun Xian, Zuohui Fu, Handong Zhao, Yingqiang Ge, Xu Chen, Qiaoying Huang, Shijie Geng, Zhou Qin, Gerard de Melo, S. Muthukrishnan, Yongfeng Zhang
2020 Proceedings of the 29th ACM International Conference on Information & Knowledge Management  
To this end, we propose a CoArse-to-FinE neural symbolic reasoning approach (CAFE).  ...  It first generates user profiles as coarse sketches of user behaviors, which subsequently guide a path-finding process to derive reasoning paths for recommendations as fine-grained predictions.  ...  Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsors.  ... 
doi:10.1145/3340531.3412038 dblp:conf/cikm/XianFZGCHG0MMZ20 fatcat:a3z63vre6rcezcupmqxlr34b3a
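
To make the "profile sketches guide path-finding" idea concrete, here is a toy, hypothetical version: a coarse user profile is a set of preferred relation patterns, and only fine-grained paths whose relation sequence matches a pattern prefix are expanded. The graph, entities, and patterns are made up; this is not the paper's code.

```python
# Minimal profile-guided path reasoning over a toy knowledge graph.
KG = {
    ("u1", "purchase"): ["i1"],
    ("i1", "also_bought"): ["i2", "i3"],
    ("i1", "category"): ["c1"],
    ("c1", "has_item"): ["i4"],
}

def find_paths(user, patterns, max_hops=2):
    """Expand only along relation sequences allowed by the coarse profile."""
    paths = [[user]]
    for _ in range(max_hops):
        nxt = []
        for p in paths:
            for (head, rel), tails in KG.items():
                if head != p[-1]:
                    continue
                # relations sit at odd positions of an entity/relation path
                rels = [p[i] for i in range(1, len(p), 2)] + [rel]
                if any(pat[: len(rels)] == rels for pat in patterns):
                    nxt += [p + [rel, t] for t in tails]
        paths = nxt
    return paths

profile = [["purchase", "also_bought"], ["purchase", "category"]]
for path in find_paths("u1", profile):
    print(" -> ".join(path))
```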

A massively parallel solver for discrete Poisson-like problems

Yvan Notay, Artem Napov
2015 Journal of Computational Physics  
The sequential version is well suited to solve linear systems arising from the discretization of scalar elliptic PDEs.  ...  It is scalable in the sense that the time needed to solve a system is (under known conditions) proportional to the number of unknowns.  ...  Acknowledgments We acknowledge PRACE for awarding us access to resources CURIE (Intel Farm at CEA, France), JUQUEEN (IBM BG/Q at Juelich, Germany) and HERMIT (Cray XE6 at HLRS, Stuttgart, Germany).  ... 
doi:10.1016/j.jcp.2014.10.043 fatcat:kdo4mgsgxfaarmlijniomz2mni
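
For readers unfamiliar with the problem class, the sketch below builds the kind of discrete Poisson system the solver targets and solves it with SciPy's plain conjugate gradients; the paper's aggregation-based AMG solver itself is not reproduced here.

```python
# Toy 2-D Poisson system (5-point finite differences), solved with CG.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n = 64                              # grid points per side
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = sp.kron(I, T) + sp.kron(T, I)   # 2-D Laplacian, n*n unknowns
b = np.ones(n * n)

x, info = cg(A, b)
print("converged:", info == 0, "residual:", np.linalg.norm(b - A @ x))
```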

Multi-view Multi-behavior Contrastive Learning in Recommendation [article]

Yiqing Wu, Ruobing Xie, Yongchun Zhu, Xiang Ao, Xin Chen, Xu Zhang, Fuzhen Zhuang, Leyu Lin, Qing He
2022 arXiv   pre-print
Multi-behavior recommendation (MBR) aims to jointly consider multiple behaviors to improve the target behavior's performance.  ...  In this work, we propose a novel Multi-behavior Multi-view Contrastive Learning Recommendation (MMCLR) framework, including three new CL tasks to solve the above challenges, respectively.  ...  Besides the coarse-grained commonalities, users' multiple behaviors also have fine-grained differences.  ... 
arXiv:2203.10576v1 fatcat:xfdmok2qnjgbxolhgvz2xh3wl4
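
The contrastive tasks named in the abstract typically reduce to an InfoNCE-style objective between two views of the same user; a minimal sketch follows. The view names, shapes, and temperature are assumptions, not MMCLR's exact losses.

```python
# InfoNCE between two views (e.g., sequence view vs. graph view) of users.
import torch
import torch.nn.functional as F

def info_nce(view_a, view_b, temperature=0.2):
    """view_a, view_b: [batch, dim] user representations from two views."""
    a = F.normalize(view_a, dim=-1)
    b = F.normalize(view_b, dim=-1)
    logits = a @ b.t() / temperature      # similarity of every user pair
    labels = torch.arange(a.size(0))      # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

seq_view = torch.randn(32, 64)
graph_view = torch.randn(32, 64)
print(info_nce(seq_view, graph_view).item())
```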

Fast and Robust Semi-Automatic Registration of Photographs to 3D Geometry [article]

Ruggero Pintus, Enrico Gobbetti, Roberto Combet
2011 VAST: International Symposium on Virtual Reality  
A specialized sparse bundle adjustment (SBA) step, exploiting the correspondence between the model deriving from image features and the fine input 3D geometry, is then used to refine intrinsic and extrinsic  ...  We then coarsely register this model to the given 3D geometry by estimating a global scale and absolute orientation using minimal manual intervention.  ...  If this is not the case, e.g., in the presence of large drifts possibly generated by sequential SfM approaches, coarse alignment may fail.  ... 
doi:10.2312/vast/vast11/009-016 fatcat:mfxm72rkn5cvxljsummr5v2n4e
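
The coarse step the abstract describes, estimating a global scale and absolute orientation, is classically done with a Horn/Umeyama similarity-transform fit; a compact generic version is below. It is a standard-algorithm sketch, not the paper's pipeline.

```python
# Umeyama similarity alignment: dst_i ~ s * R @ src_i + t.
import numpy as np

def similarity_align(src, dst):
    mu_s, mu_d = src.mean(0), dst.mean(0)
    S, D = src - mu_s, dst - mu_d
    U, sig, Vt = np.linalg.svd(D.T @ S / len(src))
    sgn = np.sign(np.linalg.det(U @ Vt))          # avoid reflections
    R = U @ np.diag([1, 1, sgn]) @ Vt
    s = (sig[:2].sum() + sgn * sig[2]) / (S ** 2).sum() * len(src)
    t = mu_d - s * R @ mu_s
    return s, R, t

rng = np.random.default_rng(0)
src = rng.normal(size=(100, 3))
dst = 2.0 * src @ np.eye(3).T + np.array([1.0, -2.0, 0.5])
s, R, t = similarity_align(src, dst)
print(round(s, 3), t.round(3))   # 2.0 [ 1. -2.  0.5]
```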

High Performance Parallel Algorithms for the Tucker Decomposition of Sparse Tensors

Oguz Kaya, Bora Ucar
2016 2016 45th International Conference on Parallel Processing (ICPP)  
We propose a coarse and a fine-grain parallel algorithm in a distributed memory environment, investigate data dependencies, and identify efficient communication schemes.  ...  We investigate an efficient parallelization of a class of algorithms for the well-known Tucker decomposition of general N -dimensional sparse tensors.  ...  We proposed a coarse and a fine-grain parallel algorithm with their corresponding task definitions, and investigated the issues of load balance and communication cost reduction on different components  ... 
doi:10.1109/icpp.2016.19 dblp:conf/icpp/KayaU16 fatcat:akadjmvgivf6hg5hiodp5eq6h4
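
As a point of reference for what gets parallelized, here is a dense, sequential truncated HOSVD, the unfolding-plus-SVD computation at the heart of Tucker decomposition; the paper's contribution is distributing this kind of work for sparse tensors, which this sketch does not attempt.

```python
# Sequential truncated HOSVD: factors from mode-n unfoldings, then the core.
import numpy as np

def hosvd(X, ranks):
    factors = []
    for n, r in enumerate(ranks):
        unfold = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)
        U, _, _ = np.linalg.svd(unfold, full_matrices=False)
        factors.append(U[:, :r])
    core = X
    for n, U in enumerate(factors):   # mode-n products with U^T
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, n, 0), axes=1), 0, n)
    return core, factors

X = np.random.rand(10, 12, 8)
core, factors = hosvd(X, (4, 4, 4))
print(core.shape, [U.shape for U in factors])
```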

CauseRec: Counterfactual User Sequence Synthesis for Sequential Recommendation [article]

Shengyu Zhang, Dong Yao, Zhou Zhao, Tat-seng Chua, Fei Wu
2021 arXiv   pre-print
Recent advances in sequential recommenders have convincingly demonstrated high capability in extracting effective user representations from the given behavior sequences.  ...  The results demonstrate that the proposed CauseRec outperforms state-of-the-art sequential recommenders by learning accurate and robust user representations.  ...  SOTA sequential recommenders?  ... 
arXiv:2109.05261v1 fatcat:ml4l2scfvfgh5lbdbh7r6blawq
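
A toy version of counterfactual sequence synthesis: positions the model deems dispensable (low importance score) are swapped for sampled negatives to create counterfactual user histories. The scores here are random stand-ins for real attention or concept scores, and the replacement rule is an assumption, not CauseRec's.

```python
# Replace the k lowest-scored positions with random negative items.
import random

def counterfactual_sequences(seq, scores, n_items, k=2, n_samples=3):
    dispensable = sorted(range(len(seq)), key=lambda i: scores[i])[:k]
    out = []
    for _ in range(n_samples):
        cf = list(seq)
        for i in dispensable:
            cf[i] = random.randrange(n_items)   # sampled negative
        out.append(cf)
    return out

random.seed(0)
history = [12, 7, 99, 3, 41]
scores = [0.9, 0.1, 0.8, 0.05, 0.7]   # stand-in importance scores
for cf in counterfactual_sequences(history, scores, n_items=1000):
    print(cf)
```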

Using coordination to parallelize sparse-grid methods for 3-D CFD problems

Kees Everaars, Barry Koren
1998 Parallel Computing  
To this end, an existing sequential computational fluid dynamics (CFD) code for a standard 3-D problem from computational aerodynamics is restructured into a parallel application.  ...  In this paper, we investigate the good parallel computing properties of sparse-grid solution techniques.  ...  Acknowledgements The authors want to thank Farhad Arbab for his suggestions to improve this paper.  ... 
doi:10.1016/s0167-8191(98)00043-x fatcat:c5j6pksp2bg2tnskrz22l4w5uy
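
The parallelism in sparse-grid methods comes from the combination technique: independent solves on cheap anisotropic grids are combined afterwards. The sketch below shows the 2-D combination formula with a stand-in "solver" that just samples a known function; each grid solve is an obvious unit of coarse-grain parallel work. The setup is illustrative, not the paper's CFD code.

```python
# 2-D combination technique: sum over |l|=n grids minus sum over |l|=n-1.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def solve_on_grid(lx, ly):
    """Stand-in 'solve': sample u(x,y)=sin(pi x)sin(pi y) on a 2^lx x 2^ly grid."""
    x = np.linspace(0, 1, 2**lx + 1)
    y = np.linspace(0, 1, 2**ly + 1)
    U = np.sin(np.pi * x)[:, None] * np.sin(np.pi * y)[None, :]
    return RegularGridInterpolator((x, y), U)

def combination(level, pts):
    total = np.zeros(len(pts))
    for lx in range(1, level):          # lx + ly = level
        total += solve_on_grid(lx, level - lx)(pts)
    for lx in range(1, level - 1):      # lx + ly = level - 1
        total -= solve_on_grid(lx, level - 1 - lx)(pts)
    return total

pts = np.array([[0.3, 0.7], [0.5, 0.5]])
print(combination(6, pts))              # approximates sin(pi x) sin(pi y)
```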

Engineering fast multilevel support vector machines [article]

E. Sadrfaridpour, T. Razzaghi, I. Safro
2019 arXiv   pre-print
The experimental results demonstrate a significant speedup compared to the state-of-the-art nonlinear SVM libraries.  ...  Typically, nonlinear kernels produce significantly higher classification quality than linear kernels but introduce extra kernel and model parameters, which require computationally expensive fitting.  ...  Acknowledgements We would like to thank three anonymous reviewers whose valuable comments helped to improve this paper significantly.  ... 
arXiv:1707.07657v3 fatcat:2nelqyth2nb4xmzen7fhcrro5e
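
A two-level caricature of multilevel SVM training: fit on a coarsened sample, then refine on fine-level points near the coarse support vectors. The random-sample coarsening and hyperparameters are illustrative stand-ins for the framework's AMG-style coarsening.

```python
# Coarse fit on a sample, then uncoarsen via neighbors of support vectors.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

X, y = make_classification(n_samples=4000, n_features=10, random_state=0)

rng = np.random.default_rng(0)
idx = rng.choice(len(X), 400, replace=False)      # stand-in coarsening
coarse = SVC(kernel="rbf").fit(X[idx], y[idx])

nn = NearestNeighbors(n_neighbors=20).fit(X)
_, nbrs = nn.kneighbors(coarse.support_vectors_)  # fine points near SVs
refine_idx = np.unique(nbrs)
fine = SVC(kernel="rbf").fit(X[refine_idx], y[refine_idx])
print("refined on", len(refine_idx), "points; acc:", fine.score(X, y))
```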

High-performance algebraic multigrid solver optimized for multi-core based distributed parallel systems

Jongsoo Park, Mikhail Smelyanskiy, Ulrike Meier Yang, Dheevatsa Mudigere, Pradeep Dubey
2015 Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis on - SC '15  
While node-level performance of AMG is generally limited by memory bandwidth, achieving high bandwidth efficiency is challenging due to highly sparse irregular computation, such as triple sparse matrix products, sparse matrix-dense vector multiplications, independent-set coarsening algorithms, and smoothers such as Gauss-Seidel.  ...  Acknowledgements The authors first would like to thank Robert Falgout for the discussion that led to this paper.  ... 
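
The "triple sparse matrix product" named in the abstract is the Galerkin coarse-grid construction R·A·P; the SciPy sketch below spells it out on a toy 1-D Laplacian with a made-up pairwise-aggregation prolongation. The paper's contribution is making exactly this kind of operation bandwidth-efficient, which this sketch does not address.

```python
# Galerkin triple product: A_coarse = R @ A @ P with sparse matrices.
import numpy as np
import scipy.sparse as sp

n = 8
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")

# Toy piecewise-constant prolongation aggregating pairs of fine points.
P = sp.csr_matrix((np.ones(n), (np.arange(n), np.arange(n) // 2)),
                  shape=(n, n // 2))
R = P.T                      # restriction as the transpose of prolongation

A_coarse = R @ A @ P         # the sparse triple product
print(A_coarse.toarray())
```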

A Multitask Learning Model with Multiperspective Attention and Its Application in Recommendation

Yingshuai Wang, Dezheng Zhang, Aziguli Wulamu, Jin Jing
2021 Computational Intelligence and Neuroscience  
The results show that our model consistently achieves remarkable improvements over the state-of-the-art method.  ...  To achieve more flexible parameter sharing while maintaining the distinctive feature advantage of each task, we improve the attention mechanism from the perspective of expert interaction.  ...  We choose coarse-grained attention, considering that fine-grained attention may lead to overfitting.  ... 
doi:10.1155/2021/8550270 pmid:34691173 pmcid:PMC8536436 fatcat:4fggvnrr3rcjhfjlowf5qqmi2i
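
The "flexible parameter sharing" here belongs to the MMoE family: shared experts with per-task attention (gates) over them. The sketch below is a generic layer of that kind; the expert count, dimensions, and two-task setup are assumptions, not the paper's architecture.

```python
# Shared experts plus per-task softmax attention over experts (MMoE-style).
import torch
import torch.nn as nn

class MultiTaskExperts(nn.Module):
    def __init__(self, d_in=32, d_hidden=16, n_experts=4, n_tasks=2):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
             for _ in range(n_experts)])
        self.gates = nn.ModuleList(
            [nn.Linear(d_in, n_experts) for _ in range(n_tasks)])
        self.towers = nn.ModuleList(
            [nn.Linear(d_hidden, 1) for _ in range(n_tasks)])

    def forward(self, x):
        E = torch.stack([e(x) for e in self.experts], dim=1)  # [B, E, H]
        outs = []
        for gate, tower in zip(self.gates, self.towers):
            w = torch.softmax(gate(x), dim=-1).unsqueeze(-1)  # [B, E, 1]
            outs.append(tower((w * E).sum(dim=1)))            # task logit
        return outs

model = MultiTaskExperts()
ctr_logit, cvr_logit = model(torch.randn(8, 32))
print(ctr_logit.shape, cvr_logit.shape)
```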

Controlled GAN-Based Creature Synthesis via a Challenging Game Art Dataset – Addressing the Noise-Latent Trade-Off [article]

Vaibhav Vavilala, David Forsyth
2021 arXiv   pre-print
While noise inputs to StyleGAN2 are essential for good synthesis, we find that coarse-scale noise interferes with latent variables on this dataset because both control long-scale image effects.  ...  We apply these methods to synthesize card art by training on a novel Yu-Gi-Oh dataset.  ...  The remaining images show the result of style-mixing the fine-scale latents from the first image in that column with the coarse-scale latents from the first image in that row.  ... 
arXiv:2108.08922v2 fatcat:avfdpegervdtdiy4xfqvbuxl7q
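
The style-mixing operation in the snippet is simple: per-layer latents below a crossover come from one image (coarse structure), the rest from another (fine detail). The sketch uses stand-in latents; an actual StyleGAN2 generator is assumed but not included.

```python
# Style mixing at a crossover layer with stand-in per-layer latents.
import numpy as np

def mix_styles(w_a, w_b, crossover):
    """w_a, w_b: [n_layers, dim]; coarse layers from a, fine layers from b."""
    w = w_b.copy()
    w[:crossover] = w_a[:crossover]
    return w

n_layers, dim = 14, 512
w_a = np.random.randn(n_layers, dim)   # source of coarse structure
w_b = np.random.randn(n_layers, dim)   # source of fine detail
mixed = mix_styles(w_a, w_b, crossover=6)
print(mixed.shape)                     # would feed a generator's synthesis net
```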

Deep Coarse-to-fine Dense Light Field Reconstruction with Flexible Sampling and Geometry-aware Fusion [article]

Jing Jin and Junhui Hou and Jie Chen and Huanqiang Zeng and Sam Kwong and Jingyi Yu
2020 arXiv   pre-print
Our proposed method, an end-to-end trainable network, reconstructs a densely-sampled LF in a coarse-to-fine manner.  ...  Specifically, the coarse sub-aperture image (SAI) synthesis module first explores the scene geometry from an unstructured sparsely-sampled LF and leverages it to independently synthesize novel SAIs  ...  We inherit the coarse-to-fine framework in [25]. That is, the proposed model consists of two modules, namely the coarse SAI synthesis and the efficient LF refinement.  ... 
arXiv:1909.01341v3 fatcat:ue3zpzecmjhbta7cadqexedexm
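
A skeletal version of the two-module structure the snippet describes: a coarse module synthesizes the novel views, and a refinement module adds a learned residual. Both networks are single-convolution placeholders, nothing like the published model; only the coarse-then-residual-refine wiring is the point.

```python
# Coarse synthesis followed by residual refinement, in miniature.
import torch
import torch.nn as nn

class CoarseToFineLF(nn.Module):
    def __init__(self, in_views=4, out_views=8):
        super().__init__()
        self.coarse = nn.Conv2d(in_views, out_views, 3, padding=1)
        self.refine = nn.Conv2d(out_views, out_views, 3, padding=1)

    def forward(self, sparse_views):
        coarse = self.coarse(sparse_views)     # stand-in coarse SAI synthesis
        return coarse + self.refine(coarse)    # stand-in refinement residual

lf = CoarseToFineLF()
sparse = torch.randn(1, 4, 64, 64)             # 4 input sub-aperture images
print(lf(sparse).shape)                        # torch.Size([1, 8, 64, 64])
```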

GSL4Rec: Session-based Recommendations with Collective Graph Structure Learning and Next Interaction Prediction

Chunyu Wei, Bing Bai, Kun Bai, Fei Wang
2022 Proceedings of the ACM Web Conference 2022  
We also propose a phased heuristic learning strategy to sequentially and synergistically train the graph learning part and recommendation part of GSL4Rec, thus improving the effectiveness  ...  i.e., the coarse neighbor screening and the self-adaptive graph structure learning, to enable the exploration of potential links among all users while maintaining a tractable amount of computation for scalability  ...  For example, GRU4Rec [9] applied gated recurrent units (GRU) to model sequential behaviors and make recommendations.  ... 
doi:10.1145/3485447.3512085 fatcat:2xj7j332hzberdijifzxd4eb24
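
A sketch of the coarse neighbor screening stage: a cheap cosine-similarity pass keeps a small candidate set per user before any expensive graph learning runs. The two-stage split mirrors the abstract; the scoring rule and top-k cutoff are assumptions.

```python
# Keep the top-k most similar users as candidate graph neighbors.
import torch

def screen_neighbors(user_emb, k=5):
    """user_emb: [n_users, d] -> indices of top-k similar users each."""
    z = torch.nn.functional.normalize(user_emb, dim=-1)
    sim = z @ z.t()
    sim.fill_diagonal_(float("-inf"))       # no self-loops
    return sim.topk(k, dim=-1).indices      # [n_users, k] candidate links

emb = torch.randn(100, 32)
candidates = screen_neighbors(emb)
print(candidates.shape)                     # torch.Size([100, 5])
```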

Parallel, multigrain iterative solvers for hiding network latencies on MPPs and networks of clusters

James R. McCombs, Andreas Stathopoulos
2003 Parallel Computing  
We call this combination of fine- and coarse-grain parallelism multigrain.  ...  However, these solvers are usually implemented in a fine-grain manner and, when scaled to large numbers of processors on MPPs, can incur significant performance penalties due to synchronization overheads.  ...  Because of the sequential nature of iterative methods, it is difficult to incorporate coarse-grain parallelism into them.  ... 
doi:10.1016/s0167-8191(03)00101-7 fatcat:d3gbbga5m5dapnf4z4zvc7gdw4
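
To illustrate grafting coarse-grain work onto a fine-grain iterative solver, the sketch below wraps SciPy's CG with a block-Jacobi preconditioner whose blocks could each be handled by an independent node. This is a generic miniature, not the paper's multigrain scheme.

```python
# Block-Jacobi preconditioned CG; block applies are independent per node.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

n, nb = 256, 4                                   # problem size, block count
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

step = n // nb                                   # factor each block once
blocks = [np.linalg.inv(A[i:i + step, i:i + step].toarray())
          for i in range(0, n, step)]

def apply_prec(r):
    return np.concatenate([B @ r[j * step:(j + 1) * step]
                           for j, B in enumerate(blocks)])

M = LinearOperator((n, n), matvec=apply_prec)
x, info = cg(A, b, M=M)
print("converged:", info == 0)
```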
Showing results 1 — 15 out of 5,865 results