
Practical Natural Language Processing References

Return to Practical Natural Language Processing, Python NLP, NLP, NLP bibliography, Python AI - AI bibliography, Python ML - Machine Learning (ML) bibliography, Python DL - Deep Learning (DL) bibliography, Python Data Science - Data Science bibliography

(PrctNLP 2020)

[1] ONNX: An open format built to represent machine learning models. Last accessed June 15, 2020.

[2] Apache Airflow. Last accessed June 15, 2020.

[3] Apache Oozie. Last accessed June 15, 2020.

[4] Chef. Last accessed June 15, 2020.

[5] Microsoft. “MLOps examples”. Last accessed June 15, 2020.

[6] Microsoft. MLOps using Azure ML Services and Azure DevOps, (GitHub repo). Last accessed June 15, 2020.

[7] Elastic. “Anomaly Detection”.

[8] Krzus, Matt and Jason Berkowitz. “Text Classification with Gluon on Amazon SageMaker and AWS Batch”. AWS Machine Learning Blog, March 20, 2018.

[9] The Pallets Projects. “Flask”. Last accessed June 15, 2020.

[10] The Falcon Web Framework. Last accessed June 15, 2020.

[11] Django: The web framework for perfectionists with deadlines. Last accessed June 15, 2020.

[12] Docker. Last accessed June 15, 2020.

[13] Kubernetes: Production-Grade Container Orchestration. Last accessed June 15, 2020.

[14] Amazon. AWS SageMaker. Last accessed June 15, 2020.

[15] Microsoft. Azure Cognitive Services. Last accessed June 15, 2020.

[16] Sucik, Sam. “Compressing BERT for Faster Prediction”. Rasa (blog), August 8, 2019.

[17] Cheng, Yu, Duo Wang, Pan Zhou, and Tao Zhang. “A Survey of Model Compression and Acceleration for Deep Neural Networks.” 2017.

[18] Joulin, Armand, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hervé Jégou, and Tomas Mikolov. “FastText.zip: Compressing Text Classification Models”, 2016.

[19] Chee, Cedric. Awesome machine learning model compression research papers, tools, and learning material, (GitHub repo). Last accessed June 15, 2020.

[20] Burkov, Andriy. Machine Learning Engineering (Draft). 2019.

[21] Cheng, Heng-Tze. “Wide & Deep Learning: Better Together with TensorFlow.” Google AI Blog, June 29, 2016.

[22] Zheng, Alice and Amanda Casari. Feature Engineering for Machine Learning. Boston: O'Reilly, 2018. ISBN: 978-93-5213-711-4

[23] DVC: Open source version control system for machine learning projects. Last accessed June 15, 2020.

[24] Gundersen, Odd Erik and Sigbjørn Kjensmo. “State of the Art: Reproducibility in Artificial Intelligence.” The Thirty-Second AAAI Conference on Artificial Intelligence (2018).

[25] Gibney, E. “This AI Researcher Is Trying to Ward Off a Reproducibility Crisis.” Nature 577.7788 (2020): 14.

[26] TensorFlow. “Getting Started with TensorFlow Model Analysis”. Last accessed June 15, 2020.

[27] Ribeiro, Marco Tulio Correia. Lime: Explaining the predictions of any machine learning classifier, (GitHub repo). Last accessed June 15, 2020.

[28] Lundberg, Scott. Shap: A game theoretic approach to explain the output of any machine learning model, (GitHub repo). Last accessed June 15, 2020.

[29] TensorFlow. “Get started with TensorFlow Data Validation”. Last accessed June 15, 2020.

[30] Miller, Tim. “Explanation in Artificial Intelligence: Insights from the Social Sciences”, (2017).

[31] Molnar, Christoph. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. 2019.

[32] Sumo Logic. “Outlier”. Last accessed June 15, 2020.

[33] Microsoft. “Anomaly Detector API Documentation”. Last accessed June 15, 2020.

[34] Domingos, Pedro. “A Few Useful Things to Know about Machine Learning.” Communications of the ACM 55.10 (2012): 78–87.

[35] Sculley, D., Gary Holt, Daniel Golovin, Eugene Davydov, Todd Phillips, Dietmar Ebner, Vinay Chaudhary, and Michael Young. “Machine Learning: The High Interest Credit Card of Technical Debt.” SE4ML: Software Engineering for Machine Learning (NIPS 2014 Workshop).

[36] Sculley, D., Gary Holt, Daniel Golovin, Eugene Davydov, Todd Phillips, Dietmar Ebner, Vinay Chaudhary, Michael Young, Jean-Francois Crespo, and Dan Dennison. “Hidden Technical Debt in Machine Learning Systems.” Proceedings of the 28th International Conference on Neural Information Processing Systems 2 (2015): 2503–2511.

[37] McMahan, H. Brendan, Gary Holt, David Sculley, Michael Young, Dietmar Ebner, Julian Grady, Lan Nie et al. “Ad Click Prediction: A View from the Trenches.” Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2013): 1222–1230.

[38] Zinkevich, Martin. “Rules of Machine Learning: Best Practices for ML Engineering”. Google Machine Learning. Last accessed June 15, 2020.

[39] Halevy, Alon, Peter Norvig, and Fernando Pereira. “The Unreasonable Effectiveness of Data.” IEEE Intelligent Systems 24.2 (2009): 8–12.

[40] Sun, Chen, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. “Revisiting Unreasonable Effectiveness of Data in Deep Learning Era.” Proceedings of the IEEE International Conference on Computer Vision (2017): 843–852.

[41] Petrov, Slav. “Announcing SyntaxNet: The World’s Most Accurate Parser Goes Open Source”. Google AI Blog, May 12, 2016.

[42] Marcus, Mitchell, Beatrice Santorini, and Mary Ann Marcinkiewicz. “Building a Large Annotated Corpus of English: The Penn Treebank”. Computational Linguistics 19, Number 2, Special Issue on Using Large Corpora: II (June 1993).

[43] Feurer, Matthias, Aaron Klein, Katharina Eggensperger, Jost Springenberg, Manuel Blum, and Frank Hutter. “Efficient and Robust Automated Machine Learning.” Advances in Neural Information Processing Systems 28 (2015): 2962–2970.

[44] LeCun, Yann, Corinna Cortes, and Christopher J.C. Burges. “The MNIST database of handwritten digits”. Last accessed June 15, 2020.

[45] Google Cloud. “Features and capabilities of AutoML Natural Language”. Last accessed June 15, 2020.

[46] Google Cloud. “AutoML Translation”. Last accessed June 15, 2020.

[47] Microsoft Azure. “What is automated machine learning (AutoML)?”, February 28, 2020.

[48] Thakur, Abhishek and Artus Krohn-Grimberghe. “AutoCompete: A Framework for Machine Learning Competition”, (2015).

[49] Thakur, Abhishek. “Approaching (Almost) Any NLP Problem on Kaggle”. Last accessed June 15, 2020.

[50] Fayyad, Usama, Gregory Piatetsky-Shapiro, and Padhraic Smyth. “The KDD Process for Extracting Useful Knowledge from Volumes of Data.” Communications of the ACM 39.11 (1996): 27–34.

[51] Microsoft Azure. “What is the Team Data Science Process?”, January 10, 2020.

[52] Microsoft. “Team Data Science Process Documentation”. Last accessed June 15, 2020.

[53] Kidd, Chrissy. “Why Does Gartner Predict up to 85% of AI Projects Will ‘Not Deliver’ for CIOs?”, BMC Machine Learning & Big Data Blog, December 18, 2018.

[54] Google AI. “Responsible AI Practices”. Last accessed June 15, 2020.

[55] Microsoft. “Microsoft AI principles”. Last accessed June 15, 2020.

[56] Artstein, Ron and Massimo Poesio. “Inter-Coder Agreement for Computational Linguistics.” Computational Linguistics 34.4 (2008): 555–596.

[57] Adiwardana, Daniel and Thang Luong. “Towards a Conversational Agent that Can Chat About…Anything”. Google AI Blog, January 28, 2020.

[58] Enam, S. Zayd. “Why is Machine Learning ‘Hard’?”, Zayd’s Blog, November 10, 2016.

[59] Karpathy, Andrej. “Software 2.0”. Medium Programming, November 11, 2017.

[60] Heinzerling, Benjamin. “NLP’s Clever Hans Moment has Arrived”. The Gradient, August 26, 2019.

[61] Raji, Inioluwa Deborah, Andrew Smart, Rebecca N. White, Margaret Mitchell, Timnit Gebru, Ben Hutchinson, Jamila Smith-Loud, Daniel Theron, and Parker Barnes. “Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing”, (2020).

[62] Rao, Delip. “The Twelve Truths of Machine Learning for the Real World”. Delip Rao (blog), December 25, 2019.

[63] Shenfeld, David. “What I’ve Learned Working with 12 Machine Learning Startups”. Towards Data Science (blog), May 6, 2019.

[64] Snow, Charles Percy. The Two Cultures and the Scientific Revolution. Connecticut: Martino Fine Books, 2013.

[65] Chollet, François. “On The Measure of Intelligence”, (2019).

[66] Raven, John. “Raven Progressive Matrices,” in Handbook of Nonverbal Assessment. Boston: Springer, 2003.

[67] Wadhwani AI. “Maternal, Newborn, and Child Health”. Last accessed June 15, 2020.

[68] Matias, Yossi. “Keeping People Safe with AI-Enabled Flood Forecasting”. Google The Keyword (blog), September 24, 2018.

[69] Microsoft. “AI for Good”. Last accessed June 15, 2020.

[70] Sakaguchi, Keisuke, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. “WinoGrande: An Adversarial Winograd Schema Challenge at Scale”, (2019).

[71] Cam, Arif, Michael Chui, and Bryce Hall. “Global AI Survey: AI Proves Its Worth, but Few Scale Impact”. McKinsey & Company Featured Insights, November 2019.

[72] Ransbotham, Sam, Philipp Gerbert, Martin Reeves, David Kiron, and Michael Spira. “Artificial Intelligence in Business Gets Real.” MIT Sloan Management Review (September 2018).

[73] Casado, Martin and Matt Bornstein. “The New Business of AI (and How It’s Different From Traditional Software)”. Andreessen Horowitz, February 16, 2020.

Fair Use Sources




