<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Pages on From William's Desk</title><link>https://www.william-teo.com/page/</link><description>Recent content in Pages on From William's Desk</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Sun, 16 May 2021 15:42:05 +0800</lastBuildDate><atom:link href="https://www.william-teo.com/page/index.xml" rel="self" type="application/rss+xml"/><item><title>About</title><link>https://www.william-teo.com/page/about/</link><pubDate>Sun, 16 May 2021 15:42:05 +0800</pubDate><guid>https://www.william-teo.com/page/about/</guid><description>&lt;div&gt;
 &lt;img src="https://www.william-teo.com/william.jpg" alt="headshot"
 style="width: 300px; height: 300px; object-fit: cover; object-position: left; border-radius: 50%" /&gt;
&lt;/div&gt;
&lt;p&gt;Hi, I am William. This is my personal blog, where I write about my many interests and scratch a childhood itch to be a writer.&lt;/p&gt;
&lt;h2 id="research-interests"&gt;Research Interests&lt;/h2&gt;
&lt;p&gt;My current research interests centre on how multiple agents learn to act well over time: the mechanics of reinforcement learning, the strategic reasoning captured by game theory, and what it takes to develop genuine skill in hard domains. A common theme running through all of this is representation learning — the question of what an agent (or a person) learns to notice and abstract from experience, and how the quality of those internal representations shapes everything downstream.&lt;/p&gt;</description></item></channel></rss>