About us:

Xpansa creates predictive analytics using AI and ML, enterprise search, and global data aggregation. We use these powerful tools to extract facts, build hypotheses, and power knowledge management and enterprise resource planning systems. Our main targets are biotech, life sciences, and high-tech organizations. Our mission is to build systems that help the brightest minds collaborate and create the solutions of the future.

Xpansa is a global company with a strong presence in Eastern Europe. We offer our colleagues a flexible working environment that combines office and remote workspaces.

There is currently a job opening on one of our projects, InfinitySciences. You will be creating a system that helps big pharma companies and universities worldwide predict research results. It is a data analysis platform that reports early signals and calculates probabilities in the drug and therapy discovery process.


Job description:

We are looking for a developer / engineer with extensive data processing, web crawling, and cyber security knowledge to implement data extraction and conversion applications. The work will include, but will not be limited to, data aggregation, transformation, normalization, validation, web scraping, periodic parsing of open data sources, and implementation of smart crawlers. You will be analysing biomedical data sources and websites, company pages, and personal pages. Tasks will be formulated at the functional level: we expect complete solutions and algorithms packaged as a “black box” for the backend and data warehouse tiers.
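To give a flavour of the extract → normalize → validate pipeline described above, here is a minimal, self-contained sketch using only the Python standard library. The HTML snippet, field names, and validation rules are purely illustrative assumptions, not taken from any real biomedical source or from Xpansa's codebase.

```python
# Illustrative sketch: scrape tabular data from HTML, then normalize and
# validate it. The sample markup and the "compound"/"phase" schema are
# hypothetical, chosen only to demonstrate the pipeline shape.
from html.parser import HTMLParser

SAMPLE = """
<table>
  <tr><td>Aspirin</td><td> Phase II </td></tr>
  <tr><td>Ibuprofen</td><td>phase iii</td></tr>
</table>
"""

class RowExtractor(HTMLParser):
    """Collects <td> cell text into rows, one row per <tr>."""
    def __init__(self):
        super().__init__()
        self.rows = []
        self._row = []
        self._in_td = False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td":
            self._in_td = True

    def handle_endtag(self, tag):
        if tag == "td":
            self._in_td = False
        elif tag == "tr" and self._row:
            self.rows.append(self._row)

    def handle_data(self, data):
        if self._in_td and data.strip():
            self._row.append(data.strip())

# Hypothetical validation rule: only known trial phases pass.
VALID_PHASES = {"PHASE I", "PHASE II", "PHASE III", "PHASE IV"}

def normalize(rows):
    """Trim and upper-case the phase column; drop rows failing validation."""
    out = []
    for compound, phase in rows:
        phase = phase.upper()
        if compound and phase in VALID_PHASES:
            out.append({"compound": compound, "phase": phase})
    return out

parser = RowExtractor()
parser.feed(SAMPLE)
records = normalize(parser.rows)
```

In a production crawler the same three stages would typically be split across fetch scheduling, parsing, and a validation layer in front of the data warehouse; this sketch only shows the per-page transformation step.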

An active interest in biomedical data analysis is a huge benefit, as it lets you share a common motivation with the whole team.



Education:

– BS / MS / PhD in Mathematics and Statistics / Computer Science / Engineering.


Experience and Skills:

– 3+ years of Python / R development;
– Pandas;
– Deep knowledge of data validation methods;
– PHP and JS development experience is highly desired;
– Solid understanding of HTTP, REST;
– SQL and NoSQL database experience; we currently use MySQL and MongoDB;
– Strong Linux/Unix or OS X experience and proficiency with terminal commands and scripts (e.g. Bash);
– Significant open source web application development experience;
– Deep understanding of cyber security; experience protecting distributed IT systems and defending against attacks;
– Experience with network sniffers (tcpdump / Wireshark);
– Regular use of version control; we use GitLab / GitHub;
– Extracting, cleansing, and aggregating data from a variety of sources, including transactional, web, and third-party data;
– Data modelling and data architecture design experience;
– Experience working as a lead, sole, or senior developer in a similar data software engineering position, with a strong understanding of data mining techniques; building web crawlers and web scrapers and writing complex data queries are massively important;
– Interest in / knowledge of biomedicine is a massive plus;
– Upper-intermediate English communication and writing skills;
– Strong knowledge of data structures, algorithms, and fundamental concepts.


Personal qualities:

– Hacker mindset: no limits, no boundaries;
– Generalist approach to data manipulation;
– Resourceful;
– Ability to deliver. Problem-solver and result-oriented;
– Punctual, responsible and reliable;
– Likes writing: project documentation, specification design, and communication via email and messengers are an integral part of the daily work;
– Quality-obsessed;
– Interested in biomedical sciences;
– Constant learner;
– Proactive.
