Research themes

We work on natural language processing (NLP). We’re active in a number of research areas, including:

  • Dependency parsing. To understand a sentence, we need to know who did what to whom – that is, how the words relate to each other. Parsing recovers these relations: in dependency parsing, a sentence’s syntactic structure is described as a set of labelled relations connecting its words. ITU NLP works on building parsing tools and improving parsing practices (see the parsing sketch after this list).
  • Social media. Although much of social media looks like unimportant noise and chatter, it’s actually very useful – not just for informing political and business strategy, but also for detecting virus outbreaks and earthquakes. The highly varied language on social media is difficult to process. We focus on techniques for processing this language and on ways of using social media intelligence.
  • Multilingual NLP. Most NLP research is done on English – but there are approximately 7,000 known living languages, spread across 128 language families. So it’s very important to get the state of the art working in more languages than English. Alongside work on many other languages, ITU NLP has a special focus on Danish.
  • Stance detection & fake news analysis. We can estimate how true or false an online claim is by measuring the reaction around it – the stance people take towards it. At ITU NLP we continue our work on veracity in digital media.
  • Entity detection. Finding where people, organizations and places are mentioned in text is important for many tasks – building automatic summaries, doing business intelligence, and so on. These names are called entities, and they also include things like drugs, genes, and products. Detecting them reliably is tough, and a theme of research at ITU NLP (see the NER sketch after this list).
  • Deep learning approaches. Language is tough to process, so at ITU we use modern deep learning techniques to address this huge AI challenge. We’re interested in multi-task learning, transfer learning, efficient networks, and working with new and powerful toolkits, and we have a selection of GPU resources for our research computing.
  • Representation learning. It’s difficult to map human language, made of words, to the language of computers, made of numbers. Learning to represent words as numbers automatically is called representation learning. At ITU we’re particularly interested in learning multilingual representations, learning representations across different domains (a domain is a specific type of language, like news articles, conversation, or doctors’ notes), and distributional clustering (see the embedding sketch after this list).
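
To make the parsing theme concrete, here’s a minimal sketch of dependency parsing using the off-the-shelf spaCy library and its small pretrained English model – an illustration, not our own tooling:

```python
# pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The cat chased the mouse across the garden.")

# Each token points to its syntactic head via a labelled relation.
for token in doc:
    print(f"{token.text:10} --{token.dep_}--> {token.head.text}")
```

For “cat”, for instance, this prints an `nsubj` (nominal subject) arc to the verb “chased” – exactly the who-did-what-to-whom relation described above.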
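
Entity detection is just as easy to try out. Another minimal sketch, again using spaCy’s pretrained English model rather than ITU tools:

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple opened a new office in Copenhagen last March.")

# doc.ents holds the detected entity spans and their types.
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. Apple ORG, Copenhagen GPE, last March DATE
```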
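
And for representation learning, here’s a tiny sketch of learning word vectors with gensim’s word2vec implementation. The corpus here is a toy example purely for illustration – real models are trained on millions of sentences:

```python
# pip install gensim
from gensim.models import Word2Vec

# A toy corpus: in practice, use far more text.
sentences = [
    ["the", "doctor", "examined", "the", "patient"],
    ["the", "nurse", "examined", "the", "patient"],
    ["the", "cat", "chased", "the", "mouse"],
]

# Learn a numeric vector for every word seen at least once.
model = Word2Vec(sentences, vector_size=25, window=2, min_count=1, epochs=50)

print(model.wv["doctor"][:5])           # the first few dimensions of "doctor"
print(model.wv.most_similar("doctor"))  # nearest neighbours in the vector space
```

Words that appear in similar contexts (here, “doctor” and “nurse”) end up with similar vectors – that’s the distributional idea behind the representations we study.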

Sound exciting? Talk to us about how we can collaborate, or have a look at our Data Science and Computer Science programs.