Plagiarism detection has evolved significantly in recent years, partly in response to the media attention attracted by high-profile plagiarism cases involving journalists and politicians. A culture of control has established itself to guarantee integrity and honesty in all areas related to copyright and authorship, including the implementation of policies, codes of conduct, penalty tariffs, and text-matching detection software. The latter has improved dramatically alongside technological developments over the years. Currently, instances of linguistic plagiarism can easily be matched to the original, while differences between the plagiarised and the plagiarising texts are pointed out. These methods work particularly well with same-language texts; however, systematically detecting translingual plagiarism - i.e. where a derivative text copies from a source in another language without attribution - remains a problem area. This is especially so because the possible combinations of language pairs are immense, thus requiring enormous data-processing power. This session presents illustrative cases of translingual plagiarism and discusses some of the approaches adopted by forensic linguists to investigate and prove that a certain translated text is an instance of plagiarism. The keynote concludes by encouraging a discussion of computational approaches that can be adopted to assist forensic linguists in their investigations.
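As a point of departure for that discussion, one widely used baseline for translingual detection is "translate, then match": machine-translate the suspected source into the language of the derivative text, then score n-gram overlap as in monolingual detection. The sketch below is purely illustrative and assumes the translation step has already been performed; the function names and the choice of word trigrams with a containment score are assumptions for this example, not a method described in the keynote.

```python
from collections import Counter

def ngrams(text, n=3):
    # Lowercase word n-grams as a simple fingerprint of a text.
    words = text.lower().split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def overlap_score(translated_source, suspect, n=3):
    # Containment score: the fraction of the suspect text's n-grams
    # that also occur in the (machine-translated) source.
    # A high score suggests the suspect text may derive from the source.
    src, sus = ngrams(translated_source, n), ngrams(suspect, n)
    if not sus:
        return 0.0
    shared = sum((src & sus).values())
    return shared / sum(sus.values())

# Illustrative use: the suspect differs from the translated source
# by a single word substitution, yet most trigrams still match.
source = "the quick brown fox jumps over the lazy dog"
suspect = "the quick brown fox leaps over the lazy dog"
score = overlap_score(source, suspect)
```

In practice the translation step introduces paraphrase-like variation, which is why real systems relax exact n-gram matching (e.g. via character n-grams or cross-language embeddings); the containment score above is only the simplest starting point.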