About Me

I am a senior research scientist at Salesforce Research. I develop novel machine learning solutions for natural language understanding problems. I am also broadly interested in leveraging AI to help people become more productive in professional and everyday life.

My recent research has been under the following themes:

  • Natural Language Interfaces: Learning to map user utterances to executable machine instructions in complex scenarios; building natural language powered software interfaces.
  • Knowledge Representation: Learning novel representations that effectively contextualize textual and symbolic knowledge to power downstream AI systems.
  • Fairness and Accountability: Mitigating unwanted biases in machine learning models; uncertainty modeling in decision making; uncovering social biases via statistical methods.

Previously I was a graduate student at the Paul G. Allen School of Computer Science & Engineering, University of Washington, working with Luke Zettlemoyer and Michael D. Ernst on data-driven natural language programming.

Research Highlights

Photon is a deep learning based cross-domain natural language interface to databases that focuses on factual lookups. It allows end users to query a number of relational DBs in natural language, including DBs it has never been trained on. The core of the system is a strong neural text-to-SQL semantic parser trained using thousands of NL-SQL pairs grounded to hundreds of DBs. Photon validates the input question before executing the neural semantic parser, which significantly improves its robustness.
Ask Photon questions about the data and tease out its power. [ACL'20 System Demonstration]


Tellina is an end-user scripting assistant that can be queried via natural language. It translates a natural language sentence typed by the user into a short, executable script. The underlying models are neural encoder-decoders trained on NL-script pairs collected by programming experts from online tutorials and question-answering forums. We instantiate the prototype in Bash.
This work surfaces several challenges, including scalable data collection, never-ending learning, and personalization, most of which are central to all practical semantic parsing systems. [LREC'18, UW-CSE-TR'17]

Publications

Conference Proceedings

Photon: A Robust Cross-Domain Text-to-SQL System.
Jichuan Zeng*, Xi Victoria Lin*, Caiming Xiong, Richard Socher, Michael R. Lyu, Irwin King, Steven C.H. Hoi.
ACL 2020 System Demonstration.
PDF Abstract Bibtex Talk Press Live Demo
Natural language interfaces to databases (NLIDB) democratize end user access to relational data. Due to fundamental differences between natural language communication and programming, it is common for end users to issue questions that are ambiguous to the system or fall outside the semantic scope of its underlying query language. We present Photon, a robust, modular, cross-domain NLIDB that can flag natural language input to which a SQL mapping cannot be immediately determined. Photon consists of a strong neural semantic parser (63.2% structure accuracy on the Spider dev benchmark), a human-in-the-loop question corrector, a SQL executor and a response generator. The question corrector is a discriminative neural sequence editor which detects confusion span(s) in the input question and suggests rephrasing until a translatable input is given by the user or a maximum number of iterations is reached. Experiments on simulated data show that the proposed method effectively improves the robustness of the text-to-SQL system against untranslatable user input. The live demo of our system is available at http://www.naturalsql.com.
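For readers who want the gist of the validate-then-parse loop described above, here is a minimal Python sketch; the detector, parser, and executor are placeholder callables (assumptions), not Photon's actual components.

```python
# A minimal sketch of a validate-then-parse loop with a bounded number of
# rephrasing rounds. All callables passed in are stand-ins, not Photon's code.
def answer_question(question, detect_confusion, parse_to_sql, execute_sql,
                    ask_user_to_rephrase, max_iterations=3):
    for _ in range(max_iterations):
        spans = detect_confusion(question)   # confusion span(s); [] means translatable
        if not spans:
            sql = parse_to_sql(question)     # neural text-to-SQL parser
            return execute_sql(sql)
        # Otherwise ask the user to rephrase the flagged parts of the question.
        question = ask_user_to_rephrase(question, spans)
    return None  # give up once the iteration budget is exhausted
```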
Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation.
Tianlu Wang, Xi Victoria Lin, Nazeen Fatema Rajani, Bryan McCann, Vicente Ordonez and Caiming Xiong.
ACL 2020.
PDF Abstract Bibtex Blog Press Code
Word embeddings derived from human-generated corpora inherit strong gender bias which can be further amplified by downstream models. Some commonly adopted debiasing approaches, including the seminal Hard Debias algorithm, apply post-processing procedures that project pre-trained word embeddings into a subspace orthogonal to an inferred gender subspace. We discover that semantic-agnostic corpus regularities such as word frequency captured by the word embeddings negatively impact the performance of these algorithms. We propose a simple but effective technique, Double Hard Debias, which purifies the word embeddings against such corpus regularities prior to inferring and removing the gender subspace. Experiments on three bias mitigation benchmarks show that our approach preserves the distributional semantics of the pre-trained word embeddings while reducing gender bias to a significantly larger degree than prior approaches.
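A minimal sketch of the two-step idea on toy vectors: first project out a frequency-related principal component, then apply the standard hard-debias projection. The toy embedding matrix, the choice of the top component as the frequency direction, and the stand-in "he"/"she" rows are illustrative assumptions, not the paper's exact setup.

```python
# Double-Hard Debias, sketched with numpy on random toy embeddings.
import numpy as np

def project_off(vectors, direction):
    """Remove the component of each row vector along a unit direction."""
    direction = direction / np.linalg.norm(direction)
    return vectors - np.outer(vectors @ direction, direction)

rng = np.random.default_rng(0)
E = rng.normal(size=(1000, 50))          # toy vocabulary of 1000 words, dim 50

# Step 1 ("double"): estimate a frequency-related direction as a top principal
# component of the centered embeddings and project it out.
E_centered = E - E.mean(axis=0)
_, _, Vt = np.linalg.svd(E_centered, full_matrices=False)
freq_direction = Vt[0]                   # assumed frequency component
E_purified = project_off(E_centered, freq_direction)

# Step 2 ("hard"): infer a gender direction from definitional word pairs
# (two arbitrary rows stand in for "he" and "she" here) and project it out.
gender_direction = E_purified[10] - E_purified[11]
E_debiased = project_off(E_purified, gender_direction)
```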
CoSQL: A Conversational Text-to-SQL Challenge Towards Cross-Domain Natural Language Interfaces to Databases.
Tao Yu, Rui Zhang, Heyang Er, Suyi Li, Eric Xue, Bo Pang, Xi Victoria Lin, Yi Chern Tan, Tianze Shi, Zihan Li, Youxuan Jiang, Michihiro Yasunaga, Sungrok Shim, Tao Chen, Alexander Fabbri, Zifan Li, Luyao Chen, Yuwen Zhang, Shreya Dixit, Vincent Zhang, Caiming Xiong, Richard Socher, Walter Lasecki and Dragomir Radev
EMNLP 2019.
PDF Abstract Bibtex Leaderboard
We present CoSQL, a corpus for building cross-domain, general-purpose database (DB) querying dialogue systems. It consists of 30k+ turns plus 10k+ annotated SQL queries, obtained from a Wizard-of-Oz (WOZ) collection of 3k dialogues querying 200 complex DBs spanning 138 domains. Each dialogue simulates a real-world DB query scenario with a crowd worker as a user exploring the DB and a SQL expert retrieving answers with SQL, clarifying ambiguous questions, or otherwise informing of unanswerable questions. When user questions are answerable by SQL, the expert describes the SQL and execution results to the user, hence maintaining a natural interaction flow. CoSQL introduces new challenges compared to existing task-oriented dialogue datasets: (1) the dialogue states are grounded in SQL, a domain-independent executable representation, instead of domain-specific slot-value pairs, and (2) because testing is done on unseen databases, success requires generalizing to new domains. CoSQL includes three tasks: SQL-grounded dialogue state tracking, response generation from query results, and user dialogue act prediction. We evaluate a set of strong baselines for each task and show that CoSQL presents significant challenges for future research.
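As a quick illustration of what a SQL-grounded dialogue state means in contrast to slot-value pairs, here is a made-up CoSQL-style turn; the field names, dialogue act label, and values are assumptions for illustration only, not an entry copied from the corpus.

```python
# An illustrative (made-up) turn: the dialogue state is an executable SQL
# query over the current database rather than domain-specific slot-value pairs.
turn = {
    "db_id": "concert_singer",
    "user_utterance": "Which singers performed in 2014? Only show their names.",
    "user_dialogue_act": "INFORM_SQL",          # assumed act label
    "state_sql": "SELECT name FROM singer WHERE year = 2014",
    "system_response": "Here are the names of singers who performed in 2014.",
}
```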
Editing-Based SQL Query Generation for Cross-Domain Context-Dependent Questions.
Rui Zhang, Tao Yu, Heyang Er, Sungrok Shim, Eric Xue, Xi Victoria Lin, Tianze Shi, Caiming Xiong, Richard Socher and Dragomir Radev.
EMNLP 2019.
PDF Abstract Bibtex Code
We focus on the cross-domain context-dependent text-to-SQL generation task. Based on the observation that adjacent natural language questions are often linguistically dependent and their corresponding SQL queries tend to overlap, we utilize the interaction history by editing the previous predicted query to improve the generation quality. Our editing mechanism views SQL as sequences and reuses generation results at the token level in a simple manner. It is flexible to change individual tokens and robust to error propagation. Furthermore, to deal with complex table structures in different domains, we employ an utterance-table encoder and a table-aware decoder to incorporate the context of the user utterance and the table schema. We evaluate our approach on the SParC dataset and demonstrate the benefit of editing compared with the state-of-the-art baselines which generate SQL from scratch.
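A minimal sketch of token-level reuse of the previous turn's SQL: at each decoding step the model either copies a token from the previously predicted query or generates a fresh one. The step_model callable below is a stand-in (an assumption), not the paper's actual decoder.

```python
# Decoding by editing the previous query: copy or generate, token by token.
def decode_with_editing(prev_sql_tokens, step_model, max_len=60):
    """step_model(output_so_far, prev_sql_tokens) -> ("copy", index) or ("gen", token)."""
    output = []
    for _ in range(max_len):
        action, value = step_model(output, prev_sql_tokens)
        if action == "copy":
            output.append(prev_sql_tokens[value])   # reuse a previous-query token
        else:
            output.append(value)                    # emit a newly generated token
        if output[-1] == "<EOS>":
            break
    return output
```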
SParC: Cross-Domain Semantic Parsing in Context.
Tao Yu, Rui Zhang, Michihiro Yasunaga, Yi Chern Tan, Xi Victoria Lin, Suyi Li, Heyang Er, Irene Li, Bo Pang, Tao Chen, Emily Ji, Shreya Dixit, David Proctor, Sungrok Shim, Jonathan Kraft, Vincent Zhang, Caiming Xiong, Richard Socher, Dragomir Radev.
ACL 2019.
PDF Abstract Bibtex Leaderboard
We present SParC, a dataset for cross-domain Semantic Parsing in Context. It consists of 4,298 coherent question sequences (12k+ individual questions annotated with SQL queries), obtained from controlled user interactions with 200 complex databases over 138 domains. We provide an in-depth analysis of SParC and show that it introduces new challenges compared to existing datasets. SParC (1) demonstrates complex contextual dependencies, (2) has greater semantic diversity, and (3) requires generalization to new domains due to its cross-domain nature and the unseen databases at test time. We experiment with two state-of-the-art text-to-SQL models adapted to the context-dependent, cross-domain setup. The best model obtains an exact match accuracy of 20.2% over all questions and less than 10% over all interaction sequences, indicating that the cross-domain setting and the contextual phenomena of the dataset present significant challenges for future research.
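The two numbers quoted above correspond to question-level and interaction-level exact match. Here is a minimal sketch of how such accuracies can be computed, assuming predictions and gold queries have already been normalized for comparison (the real evaluation uses structural exact-set matching rather than string equality).

```python
# Question-level vs. interaction-level exact match over a list of interactions,
# where each interaction is a list of (predicted_sql, gold_sql) pairs.
def question_and_interaction_accuracy(interactions):
    question_hits = question_total = interaction_hits = 0
    for pairs in interactions:
        matches = [pred == gold for pred, gold in pairs]
        question_hits += sum(matches)
        question_total += len(matches)
        interaction_hits += all(matches)   # every turn must match
    return question_hits / question_total, interaction_hits / len(interactions)
```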
Multi-Hop Knowledge Graph Reasoning with Reward Shaping.
Xi Victoria Lin, Richard Socher and Caiming Xiong.
EMNLP 2018.
PDF Abstract Bibtex Talk Slides Press Code
Multi-hop reasoning is an effective approach for query answering (QA) over incomplete knowledge graphs (KGs). The problem can be formulated in a reinforcement learning (RL) setup, where a policy-based agent sequentially extends its inference path until it reaches a target. However, in an incomplete KG environment, the agent receives low-quality rewards corrupted by false negatives in the training data, which harms generalization at test time. Furthermore, since no golden action sequence is used for training, the agent can be misled by spurious search trajectories that incidentally lead to the correct answer. We propose two modeling advances to address both issues: (1) we reduce the impact of false negative supervision by adopting a pretrained one-hop embedding model to estimate the reward of unobserved facts; (2) we counter the sensitivity to spurious paths of on-policy RL by forcing the agent to explore a diverse set of paths using randomly generated edge masks. Our approach significantly improves over existing path-based KGQA models on several benchmark datasets and is comparable or better than embedding-based models.
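A minimal sketch of the reward-shaping idea: when the agent stops at an entity not listed among the observed answers, fall back to a pretrained one-hop embedding score instead of a hard zero. The DistMult-style scorer below is a stand-in for the paper's pretrained model, and the embedding inputs are assumed to be numpy vectors.

```python
# Reward shaping with a one-hop embedding model as the fallback signal.
import numpy as np

def distmult_score(e_s, r, e_t):
    """Bilinear one-hop score squashed to (0, 1) for the triple (source, relation, target)."""
    return float(1.0 / (1.0 + np.exp(-np.sum(e_s * r * e_t))))

def shaped_reward(reached_entity, observed_answers, e_s, r, entity_emb):
    """Binary reward for observed answers; soft embedding-model score otherwise."""
    if reached_entity in observed_answers:
        return 1.0
    return distmult_score(e_s, r, entity_emb[reached_entity])
```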
NL2Bash: A Corpus and Semantic Parser for Natural Language Interface to the Linux Operating System.
Xi Victoria Lin, Chenglong Wang, Luke Zettlemoyer and Michael D. Ernst.
LREC 2018.
PDF Abstract Bibtex Dataset & Code
We present new data and semantic parsing methods for the problem of mapping English sentences to Bash commands (NL2Bash). Our long-term goal is to enable any user to easily solve otherwise repetitive tasks (such as file manipulation, search, and application-specific scripting) by simply stating their intents in English. We take a first step in this domain by providing a large new dataset of challenging but commonly used commands paired with their English descriptions, along with baseline methods to establish performance levels on this task.
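For a flavor of the task, here is a made-up NL2Bash-style pair; the field names and the example itself are illustrative assumptions, not entries copied from the dataset.

```python
# An illustrative English-description / Bash one-liner pair.
example = {
    "description": "Find all .log files under the current directory modified "
                   "in the last 7 days and delete them.",
    "command": "find . -name '*.log' -mtime -7 -delete",
}
```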
Compositional Learning of Embeddings for Relation Paths in Knowledge Bases and Text.
Kristina Toutanova, Xi Victoria Lin, Scott Wen-tau Yih, Hoifung Poon and Chris Quirk.
ACL 2016.
PDF Abstract Bibtex
Modeling relation paths has offered significant gains in embedding models for knowledge base (KB) completion. However, enumerating paths between two entities is very expensive, and existing approaches typically resort to approximation with a sampled subset. This problem is particularly acute when text is jointly modeled with KB relations and used to provide direct evidence for facts mentioned in it. In this paper, we propose the first exact dynamic programming algorithm which enables efficient incorporation of all relation paths of bounded length, while modeling both relation types and intermediate nodes in the compositional path representations. We conduct a theoretical analysis of the efficiency gain from the approach. Experiments on two datasets show that it addresses representational limitations in prior approaches and improves accuracy in KB completion.
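A minimal sketch of the dynamic-programming idea: sum the composed representations of all relation paths of bounded length between two nodes without enumerating paths explicitly. The toy graph format and the elementwise (diagonal-matrix) composition are illustrative assumptions, not the paper's exact model.

```python
# Sum path representations of length 1..max_len from source to target by DP.
import numpy as np

def sum_path_representations(graph, relation_emb, source, target, max_len, dim):
    """graph maps a node to a list of (relation, neighbor) edges."""
    # paths_to[n] = sum of composed representations of all current-length paths
    # from `source` to n; start with the empty path at `source`.
    paths_to = {source: np.ones(dim)}
    total = np.zeros(dim)
    for _ in range(max_len):
        nxt = {}
        for node, rep in paths_to.items():
            for rel, nbr in graph.get(node, []):
                # Extend every path ending at `node` by one edge, composing the
                # running representation with the relation embedding.
                nxt[nbr] = nxt.get(nbr, np.zeros(dim)) + rep * relation_emb[rel]
        paths_to = nxt
        total += paths_to.get(target, np.zeros(dim))
    return total
```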

Workshop Proceedings & Technical Reports

Program Synthesis from Natural Language Using Recurrent Neural Networks.
Xi Victoria Lin, Chenglong Wang, Deric Pang, Kevin Vu, Luke Zettlemoyer, Michael D. Ernst.
University of Washington CSE Technical Report, 2017.
PDF Abstract Bibtex Tellina Tool
Even if a competent programmer knows what she wants to do and can describe it in English, it can still be difficult to write code to achieve the goal. Existing resources, such as question-and-answer websites, tabulate specific operations that someone has wanted to perform in the past, but they are not effective in generalizing to new tasks, to compound tasks that require combining previous questions, or sometimes even to variations of listed tasks.

Our goal is to make programming easier and more productive by letting programmers use their own words and concepts to express the intended operation, rather than forcing them to accommodate the machine by memorizing its grammar. We have built a system that lets a programmer describe a desired operation in natural language, then automatically translates it to a programming language for review and approval by the programmer. Our system, Tellina, does the translation using recurrent neural networks (RNNs), a state-of-the-art natural language processing technique that we augmented with slot (argument) filling and other enhancements.

We evaluated Tellina in the context of shell scripting. We trained Tellina's RNNs on textual descriptions of file system operations and bash one-liners, scraped from the web. Although recovering completely correct commands is challenging, Tellina achieves top-3 accuracy of 80% for producing the correct command structure. In a controlled study, programmers who had access to Tellina outperformed those who did not, even when Tellina's predictions were not completely correct, to a statistically significant degree.
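A minimal sketch of the slot (argument) filling step mentioned above: abstract concrete arguments into placeholders before translation, then copy them back into the predicted command template. The regexes and the translate() stand-in are assumptions, not Tellina's actual pipeline.

```python
# Argument abstraction and back-filling around a sentence-to-template translator.
import re

def translate_with_slot_filling(sentence, translate):
    # 1. Abstract quoted strings and file-like tokens into placeholders.
    args = re.findall(r'"[^"]+"|\S+\.\w+', sentence)
    abstracted = sentence
    for i, arg in enumerate(args):
        abstracted = abstracted.replace(arg, f"_ARG{i}_", 1)
    # 2. Translate the abstracted sentence into a command template.
    template = translate(abstracted)        # e.g. "find . -name _ARG0_"
    # 3. Fill the original arguments back into the predicted template.
    for i, arg in enumerate(args):
        template = template.replace(f"_ARG{i}_", arg.strip('"'))
    return template
```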
Multi-label Learning with Posterior Regularization.
Xi Victoria Lin, Sameer Singh, Luheng He, Ben Taskar, and Luke Zettlemoyer.
NeurIPS Workshop on Modern Machine Learning and NLP 2014.
PDF Abstract Bibtex

Professional Service

    Organizing Committee
        2020: INT-EX
    Reviewer
        2020, 2019, 2018, 2017, 2016, 2015: ICML, ACL, EMNLP, NAACL, AACL, COLING, CoNLL, NLI

Miscellaneous

    I was a PhD student of the late Ben Taskar.
    The Taskar Center for Accessible Technology (TCAT) was launched by Anat Caspi in November 2014. I am excited about its mission. Anat's expertise and unique perspective will lead to accessible technologies that could change the lives of many.
    I'm fascinated by different kinds of puzzles. At some point I tried to make a few: Sea Virus, Chocolate Crush.