[Links] Medical Image Analysis

This post is continually updated with medical image analysis papers I consider relevant. I have not yet read all of them. Pitfalls and recommendations for machine learning papers on COVID-19 diagnosis/prognosis (transferable to other diseases): Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans […]

[Summary] Accuracy of Machine Learning Models versus “Hand-Crafted” Expert Systems – A Credit Scoring Case Study

Paper: Ben-David, A., & Frank, E. (2009). Accuracy of machine learning models versus “hand crafted” expert systems – a credit scoring case study. Expert Systems with Applications, 36(3), 5264-5271. One line summary: The authors compare the accuracy of an expert system and several machine learning models on the task of credit scoring. The evaluation uses 10-fold cross-validation […]

[Summary] Learning to Summarize from Human Feedback

Paper: Stiennon, N., Ouyang, L., Wu, J., Ziegler, D. M., Lowe, R., Voss, C., … & Christiano, P. (2020). Learning to summarize from human feedback. arXiv preprint arXiv:2009.01325. Disclaimer: Most of the following is extracted from the paper and edited by me to allow the summarization of key points. One line summary: Train several algorithms for […]

[Summary] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

Paper: Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Disclaimer: Most of the following is extracted from the paper and edited by me to allow the summarization of key points. One line summary: Train a bidirectional transformer on the tasks of […]

[Summary] Self-training with Noisy Student improves ImageNet classification

Paper: Xie, Q., Luong, M. T., Hovy, E., & Le, Q. V. (2020). Self-training with noisy student improves ImageNet classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 10687-10698). Disclaimer: Most of the following is extracted from the paper and edited by me to allow the summarization of key points. One […]

[Summary] Big Transfer (BiT): General Visual Representation Learning

Paper: Kolesnikov, A., Beyer, L., Zhai, X., Puigcerver, J., Yung, J., Gelly, S., & Houlsby, N. (2019). Big Transfer (BiT): General visual representation learning. arXiv preprint arXiv:1912.11370, 6(2), 8. Disclaimer: Most of the following is extracted from the paper and edited by me to allow the summarization of key points. One line summary: The authors trained large ResNets […]

[Summary] Descending through a Crowded Valley – Benchmarking Deep Learning Optimizers

Paper: Schmidt, R. M., Schneider, F., & Hennig, P. (2020). Descending through a Crowded Valley – Benchmarking deep learning optimizers. arXiv preprint arXiv:2007.01547. Disclaimer: Most of the following is extracted from the paper and edited by me to allow the summarization of key points. One line summary: The authors compare the performance of optimizers across several hyperparameter settings […]

[Summary] The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks

Paper: Frankle, J., & Carbin, M. (2018). The lottery ticket hypothesis: Finding sparse, trainable neural networks. arXiv preprint arXiv:1803.03635. Disclaimer: Most of the following is extracted from the paper and edited by me to allow the summarization of key points. One line summary: Prune weights with small magnitudes from a big network trained on a […]

[Summary] Pretrained Transformers as Universal Computation Engines

Paper: Lu, K., Grover, A., Abbeel, P., & Mordatch, I. (2021). Pretrained Transformers as Universal Computation Engines. arXiv preprint arXiv:2103.05247. Disclaimer: Most of the following is extracted from the paper and edited by me to allow the summarization of key points. One line summary: It is possible to do transfer learning to different modalities with transformers […]

[Summary] Vision Transformers (ViT) – An Image is Worth 16×16 Words: Transformers for Image Recognition at Scale

Paper: Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., … & Houlsby, N. (2020). An image is worth 16×16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. Disclaimer: Most of the following is extracted from the paper and edited by me to allow the summarization of key points. One line […]