Languages change over time. While the study of meaning is by no means new, the past few years have seen considerable attention devoted to computational approaches to specific tasks within (historical) semantics. Advances in NLP and the availability of massive textual corpora have enabled new research methods for lexical semantic change, drawing on approaches ranging from topic models to neural word embeddings. Knowing what a word means at a particular point in time is crucial for text-based research in the humanities. While current computational methods offer solutions for some contexts (typically recent English, with clean data), this growing community lacks, on the one hand, an extensive overview of existing work and, on the other, an interdisciplinary discussion among the major parties and fields interested in the topic. Moreover, across all this work, evaluating results and comparing approaches is close to impossible: the lack of a standardised definition of lexical semantic change, both in general terms and with regard to particular corpora and time spans, together with the inadequacy (or absence) of current evaluation frameworks, makes reproducing methods in other contexts very difficult.
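To make the embedding-based line of work mentioned above concrete, the following is a minimal sketch of one common setup, not a method advanced here: train one word2vec model per time slice, align the two vector spaces with an orthogonal Procrustes rotation, and rank the shared vocabulary by cosine distance. The gensim and scipy calls are standard, but the corpus variables and hyperparameters are illustrative assumptions.

```python
# Sketch: embedding-based detection of lexical semantic change between
# two time slices. Hyperparameters and corpora are placeholders.
import numpy as np
from gensim.models import Word2Vec
from scipy.linalg import orthogonal_procrustes
from scipy.spatial.distance import cosine

def train_slice(sentences):
    """Train a Word2Vec model on one time slice (a list of token lists)."""
    return Word2Vec(sentences, vector_size=100, window=5, min_count=5, epochs=10)

def semantic_change(model_t1, model_t2):
    """Score each shared word by cosine distance after aligning the spaces."""
    shared = [w for w in model_t1.wv.index_to_key
              if w in model_t2.wv.key_to_index]
    a = np.stack([model_t1.wv[w] for w in shared])
    b = np.stack([model_t2.wv[w] for w in shared])
    # Orthogonal Procrustes finds the rotation R minimising ||aR - b||_F,
    # making vectors from the two independently trained spaces comparable.
    r, _ = orthogonal_procrustes(a, b)
    a_aligned = a @ r
    return {w: cosine(a_aligned[i], b[i]) for i, w in enumerate(shared)}

# Hypothetical usage, assuming slices_1800s and slices_1900s are
# tokenised corpora for the two periods:
# scores = semantic_change(train_slice(slices_1800s), train_slice(slices_1900s))
# shifted = sorted(scores, key=scores.get, reverse=True)[:20]  # most-changed words
```

Even in this simple setup, the evaluation problem noted above is visible: which words in the ranked output count as genuine semantic change depends entirely on the definition and gold standard one adopts.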