
EqGNN: Equalized Node Opportunity in Graphs [article]

Uriel Singer, Kira Radinsky
2021 arXiv   pre-print
Graph neural networks (GNNs) have been widely used for supervised learning tasks on graphs, reaching state-of-the-art results. However, little work has been dedicated to creating unbiased GNNs, i.e., ones whose classification is uncorrelated with sensitive attributes such as race or gender. Some approaches ignore the sensitive attributes or optimize for the statistical-parity criterion for fairness. However, it has been shown that neither approach ensures fairness, but rather cripples the utility of the prediction task. In this work, we present a GNN framework that allows optimizing representations for the Equalized Odds fairness criterion. The architecture is composed of three components: (1) a GNN classifier predicting the utility class, (2) a sampler learning the distribution of the sensitive attributes of the nodes given their labels, which generates samples fed into (3) a discriminator that discriminates between true and sampled sensitive attributes using a novel "permutation loss" function. Using these components, we train a model to neglect information regarding the sensitive attribute only with respect to its label. To the best of our knowledge, we are the first to optimize GNNs for the equalized odds criterion. We evaluate our classifier over several graph datasets and sensitive attributes and show that our algorithm reaches state-of-the-art results.
arXiv:2108.08800v1 fatcat:ezhrb4qqgrcjrjsizyy5bgppzu
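
As a rough illustration of the three-component architecture this abstract describes, the sketch below wires a classifier, a label-conditioned sampler, and a discriminator into an adversarial objective. It assumes PyTorch; all names and dimensions are hypothetical, an MLP stands in for the GNN classifier, and a plain binary cross-entropy objective stands in for the paper's "permutation loss", which is not reproduced here.

```python
# Hedged sketch of the three-component setup: (1) classifier, (2) label-conditioned
# sampler of sensitive attributes, (3) discriminator. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

D_IN, D_HID, N_CLASSES, N_SENS = 16, 32, 2, 2

# (1) Utility classifier (an MLP stand-in for the GNN).
classifier = nn.Sequential(nn.Linear(D_IN, D_HID), nn.ReLU(), nn.Linear(D_HID, N_CLASSES))
# (2) Sampler: one row of sensitive-attribute logits per utility label.
sampler = nn.Embedding(N_CLASSES, N_SENS)
# (3) Discriminator over (label logits, sensitive attribute) pairs.
discriminator = nn.Sequential(nn.Linear(N_CLASSES + N_SENS, D_HID), nn.ReLU(), nn.Linear(D_HID, 1))

x = torch.randn(8, D_IN)                 # node features
y = torch.randint(0, N_CLASSES, (8,))    # utility labels
s = torch.randint(0, N_SENS, (8,))       # true sensitive attributes

logits = classifier(x)
task_loss = F.cross_entropy(logits, y)

# Sample sensitive attributes conditioned on the label, as the sampler component does.
s_fake = torch.distributions.Categorical(logits=sampler(y)).sample()

def disc_score(attr):
    return discriminator(torch.cat([logits.detach(), F.one_hot(attr, N_SENS).float()], dim=-1))

# Standard adversarial objective: the discriminator separates true from sampled
# attributes given the label; training the classifier to fool it removes
# label-conditioned sensitive information, in the spirit of equalized odds.
disc_loss = F.binary_cross_entropy_with_logits(disc_score(s), torch.ones(8, 1)) \
          + F.binary_cross_entropy_with_logits(disc_score(s_fake), torch.zeros(8, 1))
print(task_loss.item(), disc_loss.item())
```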

Generating Timelines by Modeling Semantic Change [article]

Guy D. Rosin, Kira Radinsky
2019 arXiv   pre-print
Radinsky et al. (2012) showed that words that co-occur in history have a stronger relation, Rosin et al. (2017) introduced the supervised task of temporal semantic relatedness, and Orlikowski et al. ...
arXiv:1909.09907v1 fatcat:7igzlssbczhz3j43afre2h7vgy

On Biases of Attention in Scientific Discovery [article]

Uriel Singer, Kira Radinsky, Eric Horvitz
2020 bioRxiv   pre-print
How do nuances of scientists' attention influence what they discover? We pursue an understanding of the influences of patterns of attention on discovery with a case study about confirmations of protein-protein interactions over time. We find that modeling and accounting for attention can help us to recognize and interpret biases in databases of confirmed interactions and to better understand missing data and unknowns in our fund of knowledge.
doi:10.1101/2020.04.08.002378 fatcat:edmi753q65cnri2watt3nwe2o4

Learning to Focus when Ranking Answers [article]

Dana Sagi, Tzoof Avny, Kira Radinsky, Eugene Agichtein
2018 arXiv   pre-print
Figure 4 shows the model results over TREC-QA. ... RELATED WORK: Question-answer selection and ranking has been an active area of research for decades, presenting ...
arXiv:1808.02724v1 fatcat:257vc7yhl5byndyz5ivtrrmc3i

What If: Generating Code to Answer Simulation Questions [article]

Gal Peretz, Kira Radinsky
2022 arXiv   pre-print
Many texts, especially in chemistry and biology, describe complex processes. We focus on texts that describe a chemical reaction process and questions that ask about the process's outcome under different environmental conditions. To answer questions about such processes, one needs to understand the interactions between the different entities involved in the process and to simulate their state transitions during the process execution under different conditions. A state transition is defined as the memory modification the program does to the variables during the execution. We hypothesize that generating code and executing it to simulate the process will allow answering such questions. We therefore define a domain-specific language (DSL) to represent processes. We contribute to the community a unique dataset curated by chemists and annotated by computer scientists. The dataset is composed of process texts, simulation questions, and their corresponding computer codes represented by the DSL. We propose a neural program synthesis approach based on reinforcement learning with a novel state-transition semantic reward. The novel reward is based on the run-time semantic similarity between the predicted code and the reference code. This allows simulating complex process transitions and thus answering simulation questions. Our approach yields a significant boost in accuracy for simulation questions: 88% accuracy, as opposed to 83% for the state-of-the-art neural program synthesis approaches and 54% for state-of-the-art end-to-end text-based approaches.
arXiv:2204.07835v1 fatcat:ozsvanf6tzaa3knvxmg3ys2gpq
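
The state-transition reward lends itself to a small worked example. The sketch below (plain Python; programs and scoring rule are hypothetical) executes a predicted and a reference program and scores the fraction of reference variables that end up with matching values, a crude stand-in for the paper's run-time semantic similarity.

```python
# Illustrative run-time state-transition reward: execute predicted and reference
# programs in isolated namespaces and compare the variable states they produce.
def run(program: str) -> dict:
    env: dict = {}
    exec(program, {}, env)  # fine for a sketch; never exec untrusted code like this
    return env

def state_reward(predicted: str, reference: str) -> float:
    pred_state, ref_state = run(predicted), run(reference)
    if not ref_state:
        return 0.0
    matches = sum(1 for k, v in ref_state.items() if pred_state.get(k) == v)
    return matches / len(ref_state)

reference = "temp = 25\ntemp = temp + 10\nproduct = 'salt' if temp > 30 else None"
predicted = "temp = 25\ntemp += 10\nproduct = 'salt'"
print(state_reward(predicted, reference))  # 1.0: same final state, different code
```

Note how the reward credits semantically equivalent code that a token-level comparison would penalize, which is the point of scoring in state space.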

tBDFS: Temporal Graph Neural Network Leveraging DFS [article]

Uriel Singer, Haggai Roitman, Ido Guy, Kira Radinsky
2022 arXiv   pre-print
... (2020), fair job prediction (Singer and Kira 2022), human movement (Jain et al. 2016; Yan, Xiong, and Lin 2018; Feng et al. 2018), traffic forecasting (Yu, Yin, and Zhu 2018; Cui et al. 2018), and ... Among these works, (Singer, Guy, and Radinsky 2019) learned static representations for each graph snapshot and proposed an alignment method over the different snapshots. ...
arXiv:2206.05692v1 fatcat:zs4ykj6scncvbblbp4fc6vcm5i

Temporal Attention for Language Models [article]

Guy D. Rosin, Kira Radinsky
2022 arXiv   pre-print
Pretrained language models based on the transformer architecture have shown great success in NLP. Textual training data often comes from the web and is thus tagged with time-specific information, but most language models ignore this information. They are trained on the textual data alone, limiting their ability to generalize temporally. In this work, we extend the key component of the transformer architecture, i.e., the self-attention mechanism, and propose temporal attention - a time-aware self-attention mechanism. Temporal attention can be applied to any transformer model and requires the input texts to be accompanied by their relevant time points. It allows the transformer to capture this temporal information and create time-specific contextualized word representations. We leverage these representations for the task of semantic change detection; we apply our proposed mechanism to BERT and experiment on three datasets in different languages (English, German, and Latin) that also vary in time, size, and genre. Our proposed model achieves state-of-the-art results on all the datasets.
arXiv:2202.02093v2 fatcat:3iq6copjpvasbpziadsj3du3ie
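
One plausible reading of the mechanism, sketched in numpy below: augment each query and key with an embedding of the text's time point before computing attention scores. The dimensions and the concatenation scheme are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged numpy sketch of time-aware self-attention: queries and keys are
# augmented with a time-point embedding before scoring.
import numpy as np

rng = np.random.default_rng(0)
L, d = 4, 8                      # sequence length, model dim
X = rng.normal(size=(L, d))      # token representations
t = rng.normal(size=(L, d))      # time embeddings (one per token's time point)

Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv

# Inject time: concatenate the time embedding onto queries and keys.
Qt, Kt = np.concatenate([Q, t], axis=-1), np.concatenate([K, t], axis=-1)
scores = Qt @ Kt.T / np.sqrt(Qt.shape[-1])
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)
out = attn @ V
print(out.shape)  # (4, 8): time-specific contextualized representations
```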

Learning Word Relatedness over Time [article]

Guy D. Rosin, Eytan Adar, Kira Radinsky
2017 arXiv   pre-print
... (Radinsky et al., 2013; Shokouhi and Radinsky, 2012). ... are weighted towards current information (Shokouhi and Radinsky, 2012), and results tend to include the most recent and popular content. ...
arXiv:1707.08081v2 fatcat:drzwy42ybvcv5iwbnoupmly2ve

Named Entity Disambiguation for Noisy Text [article]

Yotam Eshel, Noam Cohen, Kira Radinsky, Shaul Markovitch, Ikuya Yamada, Omer Levy
2017 arXiv   pre-print
We address the task of Named Entity Disambiguation (NED) for noisy text. We present WikilinksNED, a large-scale NED dataset of text fragments from the web, which is significantly noisier and more challenging than existing news-based datasets. To capture the limited and noisy local context surrounding each mention, we design a neural model and train it with a novel method for sampling informative negative examples. We also describe a new way of initializing word and entity embeddings that significantly improves performance. Our model significantly outperforms existing state-of-the-art methods on WikilinksNED while achieving comparable performance on a smaller newswire dataset.
arXiv:1706.09147v2 fatcat:7jrinhvqzzdoppnb4fty64r5em
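
The abstract leaves the negative-sampling method unspecified, so the toy sketch below shows one common notion of an "informative" negative for disambiguation: another candidate entity sharing the mention's surface form, which is harder to separate than a random entity. The candidate table and entity names are made up for the example.

```python
# Hypothetical hard-negative sampling for entity disambiguation: draw negatives
# from the mention's own candidate set rather than uniformly at random.
import random

candidates = {  # surface form -> plausible target entities (invented)
    "jaguar": ["Jaguar_(animal)", "Jaguar_Cars", "Jacksonville_Jaguars"],
}

def sample_negative(mention: str, gold: str, rng: random.Random) -> str:
    hard = [e for e in candidates.get(mention, []) if e != gold]
    return rng.choice(hard) if hard else "RANDOM_ENTITY"

rng = random.Random(7)
print(sample_negative("jaguar", "Jaguar_Cars", rng))
```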

Node Embedding over Temporal Graphs [article]

Uriel Singer, Ido Guy, Kira Radinsky
2019 arXiv   pre-print
We then achieve (2) by creating a final embedding of a node by jointly learning how to combine a node's historical temporal embeddings, such that it optimizes ...
arXiv:1903.08889v2 fatcat:ooelumbkmrdldo2dy563vpauga
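
The combination step the snippet alludes to can be sketched as a weighted sum of per-snapshot embeddings, with weights that would be learned jointly with the downstream task. The numpy stand-in below is purely illustrative of that shape of computation, not the paper's optimization.

```python
# Sketch: a node's final embedding as a softmax-weighted sum of its
# per-snapshot (historical) embeddings. Scores would be learned jointly.
import numpy as np

rng = np.random.default_rng(1)
T, d = 5, 16
snapshots = rng.normal(size=(T, d))   # one embedding of the node per time step
scores = rng.normal(size=T)           # stand-in for jointly learned scores

weights = np.exp(scores) / np.exp(scores).sum()
final_embedding = weights @ snapshots  # (d,)
print(final_embedding.shape)
```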

SimGANs: Simulator-Based Generative Adversarial Networks for ECG Synthesis to Improve Deep ECG Classification [article]

Tomer Golany, Daniel Freedman, Kira Radinsky
2020 arXiv   pre-print
... Experimental Methodology: We follow the dataset partitioning as described by (De Chazal et al., 2004; Al Rahhal et al., 2016; Golany & Radinsky, 2019). ...
arXiv:2006.15353v1 fatcat:rnc7mfb5dbhvdk2s5gzne22fy4

12-Lead ECG Reconstruction via Koopman Operators

Tomer Golany, Kira Radinsky, Daniel Freedman, Saar Minha
2021 International Conference on Machine Learning  
32% of all global deaths are caused by cardiovascular diseases. Early detection, especially for patients with ischemia or cardiac arrhythmia, is crucial. To reduce the time between symptom onset and treatment, wearable ECG sensors were developed to allow for the recording of the full 12-lead ECG signal at home. However, if even a single lead is not correctly positioned on the body, that lead becomes corrupted, making automatic diagnosis on the basis of the full signal impossible. In this work, we present a methodology to reconstruct missing or noisy leads using the theory of Koopman operators. Given a dataset consisting of full 12-lead ECGs, we learn a dynamical system describing the evolution of the 12 individual signals together in time. Koopman theory indicates that there exists a high-dimensional embedding space in which the operator which propagates from one time instant to the next is linear. We therefore learn both the mapping to this embedding space and the corresponding linear operator. Armed with this representation, we are able to impute missing leads by solving a least-squares system in the embedding space, which can be achieved efficiently due to the sparse structure of the system. We perform an empirical evaluation using 12-lead ECG signals from thousands of patients, and show that we are able to reconstruct the signals in a way that enables accurate clinical diagnosis.
dblp:conf/icml/GolanyRFM21 fatcat:i7c7nrqdmfcdvb7ksxzpu3ebmy
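
The least-squares imputation step admits a compact numerical sketch. Below, random linear maps stand in for the learned embedding and Koopman operator (the real ones are learned and nonlinear in the embedding map); the missing lead's value is chosen so the embedded state propagates consistently under the operator.

```python
# Toy imputation via least squares in the embedding space: solve
#   K @ Phi @ x_t  ~=  Phi @ x_{t+1}
# for the single unknown entry of x_{t+1}.
import numpy as np

rng = np.random.default_rng(2)
n_leads, d_embed = 12, 24
Phi = rng.normal(size=(d_embed, n_leads))      # stand-in embedding map
K = rng.normal(size=(d_embed, d_embed)) * 0.1  # stand-in Koopman operator

x_t = rng.normal(size=n_leads)      # full signal at time t
x_next = rng.normal(size=n_leads)   # signal at t+1 with lead 0 corrupted
missing = 0
observed = [i for i in range(n_leads) if i != missing]

rhs = K @ (Phi @ x_t) - Phi[:, observed] @ x_next[observed]
x_miss, *_ = np.linalg.lstsq(Phi[:, [missing]], rhs, rcond=None)
print(float(x_miss[0]))  # imputed value for the corrupted lead at t+1
```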

Learning causality for news events prediction

Kira Radinsky, Sagie Davidovich, Shaul Markovitch
2012 Proceedings of the 21st international conference on World Wide Web - WWW '12  
The problem we tackle in this work is, given a present news event, to generate a plausible future event that can be caused by the given event. We present a new methodology for modeling and predicting such future news events using machine learning and data mining techniques. Our Pundit algorithm generalizes examples of causality pairs to infer a causality predictor. To obtain precise labeled causality examples, we mine 150 years of news articles and apply semantic natural language modeling techniques to titles containing certain predefined causality patterns. For generalization, the model uses a vast amount of world-knowledge ontologies mined from LinkedData, containing 200 datasets with approximately 20 billion relations. Empirical evaluation on real news articles shows that our Pundit algorithm reaches human-level performance.
doi:10.1145/2187836.2187958 dblp:conf/www/RadinskyDM12 fatcat:iaqx4boqb5h7xkklb3ndf3x5gq
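
The pattern-mining seed stage can be illustrated with a toy extractor over headlines. The patterns below are hypothetical examples of the "predefined causality patterns" the abstract mentions; the real pipeline adds semantic modeling and ontology-based generalization on top of matches like these.

```python
# Toy causality-pair extraction from headlines via predefined patterns.
import re

PATTERNS = [
    re.compile(r"^(?P<cause>.+?) leads to (?P<effect>.+)$", re.IGNORECASE),
    re.compile(r"^(?P<effect>.+?) after (?P<cause>.+)$", re.IGNORECASE),
]

def extract_causality(title: str):
    for pat in PATTERNS:
        m = pat.match(title)
        if m:
            return m.group("cause").strip(), m.group("effect").strip()
    return None

print(extract_causality("Drought leads to rising food prices"))
print(extract_causality("Stocks fall after earthquake hits region"))
```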

A word at a time

Kira Radinsky, Eugene Agichtein, Evgeniy Gabrilovich, Shaul Markovitch
2011 Proceedings of the 20th international conference on World wide web - WWW '11  
Computing the degree of semantic relatedness of words is a key functionality of many language applications such as search, clustering, and disambiguation. Previous approaches to computing semantic relatedness mostly used static language resources, while essentially ignoring their temporal aspects. We believe that a considerable amount of relatedness information can also be found by studying patterns of word usage over time. Consider, for instance, a newspaper archive spanning many years. Two words such as "war" and "peace" might rarely co-occur in the same articles, yet their patterns of use over time might be similar. In this paper, we propose a new semantic relatedness model, Temporal Semantic Analysis (TSA), which captures this temporal information. The previous state-of-the-art method, Explicit Semantic Analysis (ESA), represented word semantics as a vector of concepts. TSA uses a more refined representation, where each concept is no longer scalar, but is instead represented as a time series over a corpus of temporally ordered documents. To the best of our knowledge, this is the first attempt to incorporate temporal evidence into models of semantic relatedness. Empirical evaluation shows that TSA provides consistent improvements over the state-of-the-art ESA results on multiple benchmarks.
doi:10.1145/1963405.1963455 dblp:conf/www/RadinskyAGM11 fatcat:l5omyn5bhzfbjojef7m3f4btaa
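
The "war"/"peace" intuition is easy to make concrete: score two words by the similarity of their usage time series rather than by co-occurrence. The frequencies below are invented, and TSA's concept-level time-series representation is far richer, but the mechanism is the same in spirit.

```python
# Minimal temporal-relatedness illustration: Pearson correlation of two words'
# yearly frequency series (e.g., 1939-1946 in a newspaper archive).
import numpy as np

war   = np.array([12, 55, 80, 95, 90, 85, 40, 15], dtype=float)
peace = np.array([10, 40, 60, 75, 70, 72, 50, 20], dtype=float)

def temporal_relatedness(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.corrcoef(a, b)[0, 1])

print(temporal_relatedness(war, peace))  # high despite rare co-occurrence
```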

Predicting content change on the web

Kira Radinsky, Paul N. Bennett
2013 Proceedings of the sixth ACM international conference on Web search and data mining - WSDM '13  
Accurate prediction of changing web page content improves a variety of retrieval and web-related components. For example, given such a prediction algorithm, one can both design a better crawling strategy that only recrawls pages when necessary and build a proactive mechanism for personalization that pushes content associated with user revisitation directly to the user. While many techniques for modeling change have focused simply on past change frequency, our work goes beyond that by studying the usefulness in page change prediction of: the page's content; the degree and relationship among the prediction page's observed changes; and the relatedness to other pages and the similarity in the types of changes they undergo. We present an expert prediction framework that incorporates the information from these other signals more effectively than standard ensemble or basic relational learning techniques. In an empirical analysis, we find that using page content as well as related pages significantly improves prediction accuracy, and we compare it to common approaches. We present numerous similarity metrics to identify related pages and focus specifically on measures of temporal content similarity. We observe that the different metrics yield related pages that are qualitatively different in nature and have different effects on prediction performance.
doi:10.1145/2433396.2433448 dblp:conf/wsdm/RadinskyB13 fatcat:vnoalfvfsrdsvc73ew3dip27h4
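
A bare-bones sketch of the expert-combination idea: each signal family (past change frequency, page content, related pages) acts as an expert emitting a change probability, and a weighted vote combines them. The weights and experts are illustrative assumptions; the paper's framework learns the combination rather than fixing it.

```python
# Toy expert-combination predictor for page change probability.
def combine_experts(p_frequency: float, p_content: float, p_related: float,
                    weights=(0.5, 0.3, 0.2)) -> float:
    experts = (p_frequency, p_content, p_related)
    return sum(w * p for w, p in zip(weights, experts))

# A page that changed often, whose content looks volatile, and whose related
# pages also changed, gets a high predicted change probability.
print(combine_experts(p_frequency=0.9, p_content=0.7, p_related=0.8))  # 0.82
```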
Showing results 1 — 15 out of 69 results