Greetings! My name is Zhihua Liu (Chinese: 刘志华). I am a CHAI Postdoctoral Scholar at the University of Edinburgh, School of Engineering, under the supervision of Prof. Sotirios Tsaftaris.
Previously, I served as an algorithm engineer at JD Logistics, JD.com. I received my M.Sc. in Artificial Intelligence from the University of Edinburgh in 2016 and my B.Eng. in Internet of Things from the University of Science and Technology Beijing in 2015. I spent my undergraduate final year at the School of Computing, University of Dundee, supervised by Prof. Stephen McKenna, Dr. Sebastian Stein and Prof. Jianguo Zhang.
My research focuses on medical image analysis, computer vision and machine learning, with recent interests in causal representation learning for visual reasoning, generation and understanding.
We introduce Segment Anyword, a training-free visual prompt learning framework with test-time inverse adaptation for open-set language-grounded segmentation, where visual prompts are simultaneously regularized by linguistic structural information.
Accurate tracking of an anatomical landmark over time is of high interest for clinical applications such as minimally invasive surgery and tumor radiation therapy. In this paper, we propose a long-short diffeomorphic motion network, a multi-task framework with a learnable deformation prior that searches for plausible landmark deformations.
Considering state-of-the-art technologies and their performance, the purpose of this paper is to provide a comprehensive survey of recently developed deep learning-based brain tumor segmentation techniques.
We propose a novel classifier, Cost-sensitive Boosting Pruning Trees (CBPT), which demonstrates strong classification ability on two publicly accessible Twitter depression detection datasets.
Benefiting from deep learning techniques, remarkable progress has been made in medical image analysis in recent years. However, it remains very challenging to fully utilize relational information (the relationships between tissues, organs, or images) within deep neural network architectures. In this thesis, we propose two novel solutions to this problem, termed implicit and explicit deep relational learning. We generalize these two paradigms into different solutions and evaluate them on various medical image analysis tasks.
Human action recognition is widely applied in video surveillance, virtual reality, and human-computer interaction areas such as user experience design, and has become a hot topic in computer vision. In this report, I frame human behavior recognition as the problem of acquiring motion data through motion detection and symbolic action information, then extracting and understanding action features to achieve the classification target. On this basis, I review moving object detection, motion feature extraction, and motion understanding techniques, compare the relevant classification methods, and discuss the difficulties and research directions of this project.
Teaching Assistant
2020-2023
CO1104 Computer Architecture
CO4105 Advanced C++ Programming
CO3002 Analysis and Design of Algorithms
2019-2020
FS0023 STEM Foundation Year Lab-Physics
Others
I like traveling and photography. Here is my Instagram.
I also like sports, especially football ⚽ and table tennis 🏓. I started receiving professional table tennis training at the age of 5 and played on my school team, but gave up training in high school because of the college entrance examination.