[{"content":"These figures come from a programming languages / compiler design textbook I was reading for a course. I kept the photos as a single reference page: how compilation is staged, how today’s languages relate historically, and where syntax stops being enough — you need precedence rules, disambiguation for things like else, and attributes for semantics such as types.\nCompiler pipeline The usual story: source program → lexical analyzer (tokens) → syntax analyzer (parse trees) → intermediate code (with semantic analysis) → optional optimization → code generator → machine code on a computer, plus input data and results. A symbol table sits beside the front phases and feeds the back end.\nOptimization is optional in the diagram for a reason: teaching compilers and fast edit-compile-run loops often skip heavy optimization; shipping compilers for production workloads usually invest there, often on intermediate code where transformations are easier than at the machine-instruction level.\nGenealogy of high-level languages The second figure is a timeline graph: languages as nodes, arrows as influence or direct descent. You can read off major threads — Fortran, ALGOL as a hub, C as another hub feeding C++, Java, C#, Perl / PHP / JavaScript, LISP → Scheme → ML → Haskell, BASIC toward Visual Basic, plus COBOL, Ada, Prolog, and others.\nIt is a reminder that “picking a language” is also inheriting syntax habits, runtime models, and communities shaped by decades of prior design.\nParse trees and one unambiguous assignment For the statement A = B * (A + C), a parse tree makes the grouping explicit: the + sits under the parenthesized subexpression, and * combines B with that whole subexpression before assignment. The tree is the artifact the syntax analyzer hands to later phases.\nAmbiguity: two trees for A = B + C * A The textbook’s classic contrast: the same token string can correspond to two different trees if the grammar allows it. 
One tree groups like B + (C * A) (multiplication tighter than addition); the other like (B + C) * A. Only the first matches ordinary arithmetic precedence.\nReal grammars fix this by stratifying expressions (separate nonterminals for terms and factors, or precedence declarations in parser generators) so the parser cannot build the wrong shape.\nDangling else A second ambiguity class is structural, not arithmetic: nested if with a single else. One parse attaches else to the outer if; the other to the inner if. Languages adopt a concrete rule (most often: else binds to the nearest if) and/or require braces or end if markers so the intent is syntactically unique.\nAttribute grammars: types flow through the tree Syntax alone does not carry types or meaning. Attribute grammars decorate the tree: synthesized attributes bubble up (for example, the actual type of an expression from its leaves), and inherited attributes push context down (for example, the expected type from the left-hand side of an assignment). The book’s example with A = A + B shows expected_type on the expression and actual_type on each var, which is exactly how a compiler justifies coercions or reports errors.\nIf you are revisiting the same chapters: the through-line is pipeline first, then history for context, then formal syntax (trees), then where syntax breaks (ambiguity), then how semantics attach (attributes). 
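As a sketch of how the two attribute flows meet, here is a minimal Python illustration of the book's A = A + B example. The function name and the int/real coercion rule are my own illustrative assumptions, not the textbook's exact grammar:

```python
# Hedged sketch (illustrative rules, not the textbook's exact grammar):
# type-checking A = A + B with one synthesized and one inherited attribute.

def check_assignment(target_type, left_leaf_type, right_leaf_type):
    # Synthesized attribute: actual_type bubbles up from the leaves.
    # Assumed coercion rule: int + int -> int; anything involving real -> real.
    if left_leaf_type == 'int' and right_leaf_type == 'int':
        actual_type = 'int'
    else:
        actual_type = 'real'
    # Inherited attribute: expected_type is pushed down from the
    # left-hand side of the assignment.
    expected_type = target_type
    if actual_type != expected_type:
        raise TypeError(f'expected {expected_type}, got {actual_type}')
    return actual_type

# A = A + B with A declared real and B int: the expression's actual_type
# is real, which matches the inherited expected_type, so no error is raised.
check_assignment('real', 'real', 'int')
```

This is exactly the justification step mentioned above: the checker either accepts (possibly via coercion) or reports a type error when the synthesized and inherited attributes disagree.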
These scans are my own study aid; the original figures and prose belong to the respective textbook and publisher.\nRelated: course notes on the software life cycle — GOALS, waterfall, verification vs validation.","date":"2026-04-13","date_unix":1776109200,"id":"https://antoineboucher.info/CV/blog/posts/compiler-design-textbook-figures/","permalink":"https://antoineboucher.info/CV/blog/posts/compiler-design-textbook-figures/","post_kind":"article","section":"posts","summary":"Study notes from a programming-languages text — compiler stages, a timeline of major languages, parse trees, ambiguity (operators and dangling else), and attribute grammars for types.","tag_refs":[{"name":"Compilers","permalink":"https://antoineboucher.info/CV/blog/tags/compilers/"},{"name":"Programming Languages","permalink":"https://antoineboucher.info/CV/blog/tags/programming-languages/"},{"name":"Parsing","permalink":"https://antoineboucher.info/CV/blog/tags/parsing/"},{"name":"Computer Science","permalink":"https://antoineboucher.info/CV/blog/tags/computer-science/"},{"name":"Education","permalink":"https://antoineboucher.info/CV/blog/tags/education/"}],"tags":["Compilers","Programming Languages","Parsing","Computer Science","Education"],"tags_text":"Compilers Programming Languages Parsing Computer Science Education","thumb":"https://antoineboucher.info/CV/blog/posts/compiler-design-textbook-figures/images/ambiguity-operator-precedence_hu_15216295c2ec2aca.png","title":"Compiler pipeline, language genealogy, and why grammars matter"},{"content":"This is a single walkthrough of a movie similarity thread: Part 1 stores embeddings in PostgreSQL + pgvector and runs nearest-neighbor search in SQL; Part 2 uses Qdrant with MovieLens (dense text vectors for semantic search and sparse rating vectors for collaborative-style recommendations); Part 3 turns the same pgvector-backed catalog into the retrieval layer for a small RAG pipeline with LangChain and Ollama. 
Below are short GIFs from that work (movie-similarities-1.gif … 3.gif in this page bundle).\nVisualizations Part 1 — pgvector / SQL: exploring similar movies from embeddings and distance metrics.\nPart 2 — Qdrant + MovieLens: dense movie search and sparse user–rating neighborhoods.\nPart 3 — Grounded Q\u0026amp;A: question → retrieve rows → LLM answer tied to your catalog.\nResources GitHub (course / notebooks): AlgoETS/SimilityVectorEmbedding — includes postgres/3.LLMS.ipynb for Part 3 Medium (original pgvector article): Using vector databases to find similar movies (Part 1) Discord: discord.gg/Mgf6STuvzZ Part 1 — PostgreSQL, pgvector, and similar movies This project demonstrates how embeddings and a vector database (PostgreSQL with pgvector) support similarity search over movie descriptions and metadata, using NLP models to encode text and compare titles in vector space.\nUnderstanding vector querying and cosine similarity Vector querying with pgvector Pgvector is a PostgreSQL extension for efficient storage and querying of high-dimensional vectors. In this project, we use pgvector to handle vector data derived from movie embeddings. These embeddings capture the semantic content of movie descriptions and metadata, enabling advanced queries such as nearest-neighbor search.\nPgvector supports several distance metrics, including cosine distance (the \u0026lt;=\u0026gt; operator in SQL, equal to one minus cosine similarity). With this operator, cosine distance is computed directly inside SQL queries, which is critical for efficient similarity searches. Here\u0026rsquo;s how you can find the movies closest to a given title:\nSELECT title, embedding\nFROM movies\nORDER BY embedding \u0026lt;=\u0026gt; (SELECT embedding FROM movies WHERE title = %s) ASC\nLIMIT 10;\nCosine Similarity Cosine similarity measures the cosine of the angle between two vectors. 
This metric is widely used in NLP to assess how similar two documents (or in this case, movie descriptions) are irrespective of their size.\nCosine Similarity = (A · B) / (|A| |B|)\nOther Distance Functions Supported by pgvector Pgvector also supports other distance metrics such as L2 (Euclidean), L1 (Manhattan), and Dot Product. Each metric can be selected based on the needs of your query and the characteristics of your data:\nL2 Distance (Euclidean): suitable for measuring absolute differences between vectors. L1 Distance (Manhattan): useful in high-dimensional data spaces. Installation Install all required libraries and dependencies:\npip install transformers psycopg2 numpy boto3 torch scikit-learn matplotlib nltk sentence-transformers\nDatabase Setup #!/bin/bash\n# Install pgvector\ngit clone --branch v0.7.0 https://github.com/pgvector/pgvector.git\ncd pgvector\ndocker build --build-arg PG_MAJOR=16 -t builder/pgvector .\ncd ..\ndocker-compose up -d\n# ollama\ncurl -fsSL https://ollama.com/install.sh | sh\nollama pull bakllava\nollama pull llama2:13b-chat\n# docker-compose.yml\nversion: '3.8'\nservices:\n  postgres:\n    image: builder/pgvector\n    environment:\n      POSTGRES_USER: admin\n      POSTGRES_PASSWORD: admin\n      POSTGRES_DB: admin\n    ports:\n      - \u0026quot;5432:5432\u0026quot;\n    volumes:\n      - ./data:/var/lib/postgresql/data\n
Example Movie Entry Here is an example of how a movie is represented in the movies.json file:\n{\n  \u0026quot;titre\u0026quot;: \u0026quot;George of the Jungle\u0026quot;,\n  \u0026quot;annee\u0026quot;: \u0026quot;1997\u0026quot;,\n  \u0026quot;pays\u0026quot;: \u0026quot;USA\u0026quot;,\n  \u0026quot;langue\u0026quot;: \u0026quot;English\u0026quot;,\n  \u0026quot;duree\u0026quot;: \u0026quot;92\u0026quot;,\n  \u0026quot;resume\u0026quot;: \u0026quot;George grows up in the jungle raised by apes. Based on the Cartoon series.\u0026quot;,\n  \u0026quot;genre\u0026quot;: [\u0026quot;Action\u0026quot;, \u0026quot;Adventure\u0026quot;, \u0026quot;Comedy\u0026quot;, \u0026quot;Family\u0026quot;, \u0026quot;Romance\u0026quot;],\n  \u0026quot;realisateur\u0026quot;: {\u0026quot;_id\u0026quot;: \u0026quot;918873\u0026quot;, \u0026quot;__text\u0026quot;: \u0026quot;Sam Weisman\u0026quot;},\n  \u0026quot;scenariste\u0026quot;: [\u0026quot;Jay Ward\u0026quot;, \u0026quot;Dana Olsen\u0026quot;],\n  \u0026quot;role\u0026quot;: [\n    {\u0026quot;acteur\u0026quot;: {\u0026quot;_id\u0026quot;: \u0026quot;409\u0026quot;, \u0026quot;__text\u0026quot;: \u0026quot;Brendan Fraser\u0026quot;}, \u0026quot;personnage\u0026quot;: \u0026quot;George of the Jungle\u0026quot;},\n    {\u0026quot;acteur\u0026quot;: {\u0026quot;_id\u0026quot;: \u0026quot;5182\u0026quot;, \u0026quot;__text\u0026quot;: \u0026quot;Leslie Mann\u0026quot;}, \u0026quot;personnage\u0026quot;: \u0026quot;Ursula Stanhope\u0026quot;}\n  ],\n  \u0026quot;poster\u0026quot;: \u0026quot;https://m.media-amazon.com/images/M/MV5BNTdiM2VjYjYtZjEwNS00ZWU5LWFkZGYtZGYxMDcwMzY1OTEzL2ltYWdlL2ltYWdlXkEyXkFqcGdeQXVyMTczNjQwOTY@._V1_SY150_CR0,0,101,150_.jpg\u0026quot;,\n  \u0026quot;_id\u0026quot;: \u0026quot;119190\u0026quot;\n}\n
Working with Embeddings Embeddings are generated using models like BERT or Sentence Transformers and are utilized within pgvector to perform fast and efficient cosine similarity searches.\nGenerating Embeddings Define the models and generate embeddings for the movie data:\nmodels = {\n  \u0026quot;bart\u0026quot;: {\n    \u0026quot;model_name\u0026quot;: \u0026quot;facebook/bart-large\u0026quot;,\n    \u0026quot;tokenizer\u0026quot;: AutoTokenizer.from_pretrained(\u0026quot;facebook/bart-large\u0026quot;, trust_remote_code=True),\n    \u0026quot;model\u0026quot;: AutoModel.from_pretrained(\u0026quot;facebook/bart-large\u0026quot;, trust_remote_code=True)\n  },\n  \u0026quot;gte\u0026quot;: {\n    \u0026quot;model_name\u0026quot;: \u0026quot;Alibaba-NLP/gte-large-en-v1.5\u0026quot;,\n    \u0026quot;tokenizer\u0026quot;: AutoTokenizer.from_pretrained(\u0026quot;Alibaba-NLP/gte-large-en-v1.5\u0026quot;, trust_remote_code=True),\n    \u0026quot;model\u0026quot;: AutoModel.from_pretrained(\u0026quot;Alibaba-NLP/gte-large-en-v1.5\u0026quot;, trust_remote_code=True)\n  },\n  \u0026quot;MiniLM\u0026quot;: {\n    \u0026quot;model_name\u0026quot;: 'all-MiniLM-L12-v2',\n    \u0026quot;model\u0026quot;: SentenceTransformer('all-MiniLM-L12-v2')\n  },\n  \u0026quot;roberta\u0026quot;: {\n    \u0026quot;model_name\u0026quot;: 'sentence-transformers/nli-roberta-large',\n    \u0026quot;model\u0026quot;: SentenceTransformer('sentence-transformers/nli-roberta-large')\n  },\n  \u0026quot;e5-large\u0026quot;: {\n    \u0026quot;model_name\u0026quot;: 'intfloat/e5-large',\n    \u0026quot;tokenizer\u0026quot;: AutoTokenizer.from_pretrained('intfloat/e5-large', trust_remote_code=True),\n    \u0026quot;model\u0026quot;: AutoModel.from_pretrained('intfloat/e5-large', trust_remote_code=True)\n  }\n}\n
Test Cosine Similarity with Embeddings # Example sentences\nsentences_test = [\u0026quot;This is a fox.\u0026quot;, \u0026quot;This is a dog.\u0026quot;, \u0026quot;This is a cat.\u0026quot;, \u0026quot;This is a fox.\u0026quot;]\n# Generate embeddings\nembeddings_test = models[\u0026quot;MiniLM\u0026quot;][\u0026quot;model\u0026quot;].encode(sentences_test)\n# Calculate cosine similarity\ncosine_similarity = np.dot(embeddings_test[0], embeddings_test[1]) / (np.linalg.norm(embeddings_test[0]) * np.linalg.norm(embeddings_test[1]))\nprint(\u0026quot;Cosine Similarity:\u0026quot;, cosine_similarity)\ncosine_similarity = np.dot(embeddings_test[0], embeddings_test[3]) / (np.linalg.norm(embeddings_test[0]) * np.linalg.norm(embeddings_test[3]))\nprint(\u0026quot;Cosine Similarity Same:\u0026quot;, cosine_similarity)\nCosine Similarity: 0.46493083\nCosine Similarity Same: 1.0\n
Remove stopwords to reduce noise import nltk\nfrom nltk.corpus import stopwords\nnltk.download('stopwords')\nDefine a list of movie titles current_directory = os.getcwd()\nwith open(os.path.join(current_directory, \u0026quot;movies.json\u0026quot;), \u0026quot;r\u0026quot;) as f:\n    movies = json.load(f)\nmovies_data = []\nfor movie in movies[\u0026quot;films\u0026quot;][\u0026quot;film\u0026quot;]:\n    roles = movie.get(\u0026quot;role\u0026quot;, [])\n    if isinstance(roles, dict):\n        # If 'role' is a dictionary, make it a single-item list\n        roles = [roles]\n    # Extract actor information\n    actors = []\n    for role in roles:\n        actor_info = role.get(\u0026quot;acteur\u0026quot;, {})\n        if \u0026quot;__text\u0026quot; in actor_info:\n            actors.append(actor_info[\u0026quot;__text\u0026quot;])\n    movies_data.append({\n        \u0026quot;title\u0026quot;: movie.get(\u0026quot;titre\u0026quot;, \u0026quot;\u0026quot;),\n        \u0026quot;year\u0026quot;: movie.get(\u0026quot;annee\u0026quot;, \u0026quot;\u0026quot;),\n        \u0026quot;country\u0026quot;: movie.get(\u0026quot;pays\u0026quot;, \u0026quot;\u0026quot;),\n        \u0026quot;language\u0026quot;: movie.get(\u0026quot;langue\u0026quot;, \u0026quot;\u0026quot;),\n        \u0026quot;duration\u0026quot;: movie.get(\u0026quot;duree\u0026quot;, \u0026quot;\u0026quot;),\n        \u0026quot;summary\u0026quot;: movie.get(\u0026quot;resume\u0026quot;, \u0026quot;\u0026quot;),\n        \u0026quot;genre\u0026quot;: movie.get(\u0026quot;genre\u0026quot;, \u0026quot;\u0026quot;),\n        \u0026quot;director\u0026quot;: movie.get(\u0026quot;realisateur\u0026quot;, {\u0026quot;__text\u0026quot;: \u0026quot;\u0026quot;}).get(\u0026quot;__text\u0026quot;, \u0026quot;\u0026quot;),\n        \u0026quot;writers\u0026quot;: movie.get(\u0026quot;scenariste\u0026quot;, []),\n        \u0026quot;actors\u0026quot;: actors,\n        \u0026quot;poster\u0026quot;: movie.get(\u0026quot;poster\u0026quot;, \u0026quot;\u0026quot;),\n        \u0026quot;id\u0026quot;: movie.get(\u0026quot;_id\u0026quot;, \u0026quot;\u0026quot;)\n    })\n
Generate embeddings for movies def preprocess(text):\n    # Example preprocessing step simplified for demonstration\n    tokens = text.split()\n    # Assuming stopwords are already loaded to avoid loading them in each process\n    stopwords_set = set(stopwords.words('english'))\n    tokens = [word for word in tokens if word.lower() not in stopwords_set]\n    return ' '.join(tokens)\ndef normalize_embeddings(embeddings):\n    \u0026quot;\u0026quot;\u0026quot;Normalize the embeddings to unit vectors.\u0026quot;\u0026quot;\u0026quot;\n    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)\n    return embeddings / norms\ndef generate_embedding(movies_data, model_key, normalize=True):\n    model_config = models[model_key]\n    movie_texts = [\n        f\u0026quot;{preprocess(movie['title'])} {movie['year']} {' '.join(movie['genre'])} \u0026quot;\n        f\u0026quot;{' '.join(movie['actors'])} {movie['director']} \u0026quot;\n        f\u0026quot;{preprocess(movie['summary'])} {movie['country']}\u0026quot;\n        for movie in movies_data\n    ]\n    if 'tokenizer' in model_config:\n        # Handle HuggingFace transformer models\n        inputs = model_config['tokenizer'](movie_texts, padding=True, truncation=True, return_tensors=\u0026quot;pt\u0026quot;)\n        with torch.no_grad():\n            outputs = model_config['model'](**inputs)\n        embeddings = outputs.last_hidden_state.mean(dim=1).numpy()\n    else:\n        # Handle Sentence Transformers\n        embeddings = model_config['model'].encode(movie_texts)\n    if normalize:\n        embeddings = normalize_embeddings(embeddings)\n    return embeddings\nembeddings_MiniLM = generate_embedding(movies_data, 'MiniLM')\nembeddings_MiniLM = np.array(embeddings_MiniLM)\nprint(\u0026quot;MiniLM embeddings shape:\u0026quot;, embeddings_MiniLM.shape)\nprint(\u0026quot;MiniLM embeddings:\u0026quot;, embeddings_MiniLM[0])\n
Create connection to the database conn = psycopg2.connect(database=\u0026quot;admin\u0026quot;, host=\u0026quot;localhost\u0026quot;, user=\u0026quot;admin\u0026quot;, password=\u0026quot;admin\u0026quot;, port=\u0026quot;5432\u0026quot;)\ncur = conn.cursor()\ncur.execute(\u0026quot;CREATE EXTENSION IF NOT EXISTS vector;\u0026quot;)\nconn.commit()\ncur.execute(\u0026quot;CREATE EXTENSION IF NOT EXISTS cube;\u0026quot;)\nconn.commit()\nInserting Data into the Database Insert movie titles and their embeddings into the movies table:\ndef setup_database():\n    cur.execute('DROP TABLE IF EXISTS movies')\n    cur.execute('''\n    CREATE TABLE movies (\n        id SERIAL PRIMARY KEY,\n        title TEXT NOT NULL,\n        actors TEXT,\n        year INTEGER,\n        country TEXT,\n        language TEXT,\n        duration INTEGER,\n        summary TEXT,\n        genre TEXT[],\n        director TEXT,\n        scenarists TEXT[],\n        poster TEXT,\n        embedding_bart VECTOR(1024),\n        embedding_gte 
V","date":"2026-04-13","date_unix":1776096e3,"id":"https://antoineboucher.info/CV/blog/posts/vector-databases-similar-movies/","permalink":"https://antoineboucher.info/CV/blog/posts/vector-databases-similar-movies/","post_kind":"article","section":"posts","summary":"Movie similarity with pgvector and SQL, Qdrant with MovieLens dense and sparse vectors, and LangChain + Ollama RAG over the same catalog—embeddings, kNN, and grounded answers.","tag_refs":[{"name":"PostgreSQL","permalink":"https://antoineboucher.info/CV/blog/tags/postgresql/"},{"name":"Pgvector","permalink":"https://antoineboucher.info/CV/blog/tags/pgvector/"},{"name":"Qdrant","permalink":"https://antoineboucher.info/CV/blog/tags/qdrant/"},{"name":"Python","permalink":"https://antoineboucher.info/CV/blog/tags/python/"},{"name":"Embeddings","permalink":"https://antoineboucher.info/CV/blog/tags/embeddings/"},{"name":"RAG","permalink":"https://antoineboucher.info/CV/blog/tags/rag/"},{"name":"Machine Learning","permalink":"https://antoineboucher.info/CV/blog/tags/machine-learning/"}],"tags":["PostgreSQL","pgvector","Qdrant","Python","Embeddings","RAG","Machine Learning"],"tags_text":"PostgreSQL pgvector Qdrant Python Embeddings RAG Machine Learning","thumb":"https://antoineboucher.info/CV/blog/posts/vector-databases-similar-movies/featured_hu_b86049bbe2a8d095.png","title":"Exploring movie similarities with vector search algorithms"},{"content":"What it is Dimension is a monorepo I use to experiment with numerical code in Rust. 
The piece that holds most of the public API is mathlib: a linear-algebra-focused crate with dense and sparse matrices, vectors, standard decompositions (SVD, Cholesky, LU, PCA), solvers, and a large set of building blocks for graphics-style math (3D types, quaternions, dual quaternions, cameras, easing, curves).\nThe repo also pulls in related crates for simulation and tooling—kinematics, physics, geometry, rendering demos, neural bits, and a small site that packages some of the WASM demos—so mathlib stays the shared numeric core while other folders explore how that core feels in real programs.\nWhy Rust here Rust gives a single codebase that can target native binaries, tests, and benchmarks, then compile the same numerics to WebAssembly for an interactive demo layer. mathlib is organized by domain (linear, structure, ML-style clustering and distances, graphs and pathfinding, transforms, noise, and so on), which keeps features discoverable as the crate grows.\nOptional SIMD and WebGPU/wgpu paths exist for heavier workloads; the design doc in the repo spells out when iterative sparse solvers (for example conjugate gradient on CRS matrices) are preferable to turning a sparse problem into a dense factorization.\nTry it From the repo, the usual loop is:\ncd mathlib \u0026amp;\u0026amp; cargo build\ncd mathlib \u0026amp;\u0026amp; cargo test\nPublished API reference: docs.rs/mathlib. For architecture, type tables, and usage notes, the repo’s docs/DOCS.md is the long-form map.\nClosing Dimension is as much a workspace for learning and benchmarking as it is a library. 
If you care about Rust numerics, WASM demos, or GPU-backed primitives, the repository is the place to watch; pull requests and issues are welcome on GitHub.","date":"2026-04-13","date_unix":1776088800,"id":"https://antoineboucher.info/CV/blog/posts/dimension-mathlib-rust/","permalink":"https://antoineboucher.info/CV/blog/posts/dimension-mathlib-rust/","post_kind":"article","section":"posts","summary":"Notes on the Dimension monorepo and its mathlib crate — linear algebra, sparse matrices, WASM demos, and optional GPU paths.","tag_refs":[{"name":"Rust","permalink":"https://antoineboucher.info/CV/blog/tags/rust/"},{"name":"Linear Algebra","permalink":"https://antoineboucher.info/CV/blog/tags/linear-algebra/"},{"name":"WebAssembly","permalink":"https://antoineboucher.info/CV/blog/tags/webassembly/"},{"name":"Scientific Computing","permalink":"https://antoineboucher.info/CV/blog/tags/scientific-computing/"},{"name":"Open Source","permalink":"https://antoineboucher.info/CV/blog/tags/open-source/"}],"tags":["Rust","Linear Algebra","WebAssembly","Scientific Computing","Open Source"],"tags_text":"Rust Linear Algebra WebAssembly Scientific Computing Open Source","thumb":"/CV/blog/images/post-kind-article.png","title":"Dimension — a Rust math stack around mathlib"},{"content":"Full article in French — same slug; you can also switch to FR in the header.\nAt a glance The Business Model Canvas (BMC) describes how an organization creates, delivers, and captures value; it answers three questions: desirability, feasibility, and viability. You usually start with customer segments and value proposition, then iterate—the model evolves with the market. Segmentation makes the market concrete (B2B vs B2C, crisp criteria) instead of vague labels (“doctors”, “parents”). Interviews are for discovery: the goal is to learn, not to sell; connect, don’t convince—and stay attached to the problem, not your first idea of the solution. 
Structured feedback (strengths + one growth angle) and a short spoken pitch (no slides) clarify the idea early. A product roadmap is a strategic view over time, not a detailed project plan; it aligns vision, audience, horizon, metrics, and resources. PoC, prototype, and MVP play different roles: technology check, user interaction learning, then a first market version you can stress-test with real users or buyers. Market and value proposition: account for external forces (macro, industry, trends) and express value as offer + customer benefit. A solid pitch often follows Hook → Believe → Join: lead with the problem, show credibility and differentiation, then make a specific ask. This is a personal write-up based on QcES materials (Spring 2024 cohort) and facilitators; it is not an official program document.\nWhy these pieces fit together In practice you don’t “finish” the BMC once. You state hypotheses, collect qualitative evidence (interviews, observation), adjust segments and value proposition, then prioritize what goes on a roadmap and into a pitch. The loop below summarizes that motion.\nflowchart LR\rhyp[Hypothesis] --\u0026gt; int[Interview]\rint --\u0026gt; ins[Insight]\rins --\u0026gt; bmc[BMC and segment]\rbmc --\u0026gt; road[Roadmap]\rroad --\u0026gt; pit[Pitch] Business model: the BMC as a shared language The BMC bundles nine blocks (segments, value proposition, channels, customer relationships, revenue, resources, activities, partners, cost structure). The core idea: every block matters for survival, and the canvas is a living artifact—revisit it when customers or competitors shift.\nThe coursework stresses sequencing: understand for whom you create value and how you promise it before over-optimizing the rest. 
The hypothesis–validation loop ties research, interviews, and decisions—you test beliefs about the customer and the problem, not only about technology.\nCustomer segments: from fuzzy to actionable It helps to separate consumer, end user, and buyer (who pays isn’t always who uses). An “early” customer often has a problem, awareness, active solution search, sometimes a hacky workaround, and a plausible budget.\nB2B segmentation tends to be firmographic (size, location, sector, buying dynamics). B2C uses demographics, geography, psychographics, behavior. The exercise is to replace overly broad buckets with testable segments—people you can actually find, contact, and interview.\nSharing the idea and getting feedback A ~90 second, slide-free pitch forces clarity: a problem outsiders can follow, who benefits, the solution, and its advantages. You are not expected to cover market size and competition on day one; it is a clarity filter.\nTo receive feedback: active listening (understand before you rebut), notes, a learning mindset. To give feedback: name a real strength, then one opportunity (“I’d want to hear more about…”, “this would be stronger if…”).\nInterviews: discovery is continuous Interviews are framed as the most direct way to fill gaps in an early BMC. The aim is not to sell or “pitch” your fix during the call, but to learn from someone else’s reality.\nPeople to talk to often extend beyond the “ideal customer”—users of substitutes, experts, suppliers, communities—anyone touched by the problem. And it is not one-and-done: discovery continues through pivots.\nTypical hurdles—finding the right people, booking time, running a useful conversation—yield to preparation and humility. The mantra: fall in love with the problem, not the solution; connect rather than convince.\nRoadmap and maturity: PoC, prototype, MVP A roadmap communicates where you are headed (vision, key initiatives) over a chosen horizon for a specific audience (team, investors, partners). 
It is not the same as an operational project plan. Before filling it in, prompts include vision, readers, timeline, metrics, scope, resources, and an appropriate format.\nProof of concept: the underlying approach can work. Prototype: explore how users interact; often pre-market. MVP: just enough to put in front of real users or buyers to learn fast (including by watching what breaks). Roadmaps change shape with stage—discovery, validation, growth.\nMarket context and value proposition Understanding the market is more than counting accounts: it places the venture amid external forces—macro (PESTEL-style), industry dynamics, customer behavior, major trends. Those lenses surface constraints and openings.\nValue proposition spans what you deliver (product, service, “vehicle”) and the benefit to the customer. Course slides stress pain points and a hypothesis about why people care—raw material for discovery interviews.\nStorytelling and pitch: Hook, Believe, Join A pitch is a talk that seeks a concrete next step: someone’s time, lab access, intros, funding. Formats vary—from thirty seconds to half an hour, with or without slides.\nA structure that showed up repeatedly:\nHook — start from the problem and why it matters to an identifiable audience; do not open with the solution. Believe — present the solution, differentiation, and evidence (traction, team, recent wins). Join — a specific ask: type of help, amount, expertise, partnership, etc. A typical ~90 second scaffold runs: who you are, problem, offer for a target, value proposition, contrast with alternatives, recent milestone, call to action.\nClosing thought QcES connects tools (BMC, segmentation, roadmap) with behaviors (interviews, feedback, pitching). 
The thread: keep the problem central, make assumptions testable, and communicate clearly enough that others can help—not only cheer.\nIf you are in a similar program, consistency usually beats polish: a handful of well-run interviews often teaches more than a perfectly decorated canvas.","date":"2026-04-13","date_unix":1776088800,"id":"https://antoineboucher.info/CV/blog/posts/qces-lean-discovery-pitch/","permalink":"https://antoineboucher.info/CV/blog/posts/qces-lean-discovery-pitch/","post_kind":"conference","section":"posts","summary":"Personal synthesis of QcES sessions — business canvas, customer segment, feedback, interviews, roadmap, market fit, value proposition, and storytelling.","tag_refs":[{"name":"QcES","permalink":"https://antoineboucher.info/CV/blog/tags/qces/"},{"name":"Entrepreneurship","permalink":"https://antoineboucher.info/CV/blog/tags/entrepreneurship/"},{"name":"BMC","permalink":"https://antoineboucher.info/CV/blog/tags/bmc/"},{"name":"Lean Startup","permalink":"https://antoineboucher.info/CV/blog/tags/lean-startup/"},{"name":"Pitch","permalink":"https://antoineboucher.info/CV/blog/tags/pitch/"},{"name":"Education","permalink":"https://antoineboucher.info/CV/blog/tags/education/"}],"tags":["QcES","Entrepreneurship","BMC","Lean Startup","Pitch","Education"],"tags_text":"QcES Entrepreneurship BMC Lean Startup Pitch Education","thumb":"/CV/blog/images/post-kind-conference.png","title":"From BMC to pitch — QcES journey notes (Spring 2024)"},{"content":"I published marketwatch on PyPI: a small Python client for the MarketWatch virtual stock game (paper trading), not live brokerage access. 
If you want to script watchlists, pull game or portfolio data, or experiment with automation against the game, it wraps the flows in a straightforward API.\nLinks Package: pypi.org/project/marketwatch Documentation: antoinebou12.github.io/marketwatch Source \u0026amp; issues: github.com/antoinebou12/marketwatch What it can do Create and manage watchlists Read game details and settings Inspect portfolio, positions, and pending orders Buy and sell (in-game) Fetch the leaderboard for a game Useful if you are exploring automated strategies or small bots inside the game’s rules—see the docs for method names and return shapes.\nQuick start pip install marketwatch\nfrom marketwatch import MarketWatch\nmw = MarketWatch(\u0026#34;your_username\u0026#34;, \u0026#34;your_password\u0026#34;)\nmw.get_games()\nmw.get_price(\u0026#34;AAPL\u0026#34;)\nFor login edge cases, every method, and examples for orders and watchlists, use the documentation.\nAutomation can conflict with a platform’s terms or rate limits; use the library responsibly and check MarketWatch’s own rules if you rely on it for anything non-trivial.\nQuestions or bugs are welcome on GitHub.","date":"2026-04-13","date_unix":1776088800,"id":"https://antoineboucher.info/CV/blog/posts/marketwatch-python-trading/","permalink":"https://antoineboucher.info/CV/blog/posts/marketwatch-python-trading/","post_kind":"article","section":"posts","summary":"PyPI package `marketwatch`—a Python client for MarketWatch’s virtual stock game (watchlists, games, portfolio, orders, leaderboard).","tag_refs":[{"name":"Python","permalink":"https://antoineboucher.info/CV/blog/tags/python/"},{"name":"MarketWatch","permalink":"https://antoineboucher.info/CV/blog/tags/marketwatch/"},{"name":"Trading","permalink":"https://antoineboucher.info/CV/blog/tags/trading/"},{"name":"Finance","permalink":"https://antoineboucher.info/CV/blog/tags/finance/"},{"name":"Open 
Source","permalink":"https://antoineboucher.info/CV/blog/tags/open-source/"}],"tags":["Python","MarketWatch","Trading","Finance","Open Source"],"tags_text":"Python MarketWatch Trading Finance Open Source","thumb":"https://antoineboucher.info/CV/blog/posts/marketwatch-python-trading/featured_hu_2e3aed53151fd1e1.png","title":"Python library for MarketWatch virtual trading"},{"content":"These pages are from a software engineering textbook I used in coursework. I grouped the figures here as a single reference: goal-oriented planning, how the waterfall model sequences activities (with verification and validation paired to each phase), an incremental variant, the textbook’s list of life-cycle subgoals, a reminder that method advice depends on context, and a short ethics passage about impact on people.\nGOALS approach (Figure 3-1) The GOALS flowchart is a top-down pattern: set overall life-cycle goals (functions, constraints, schedule, usability, maintainability), analyze the problem and sketch solution structure, separate concerns into subgoals, develop solutions for each subgoal in parallel where possible, then validate those solutions against the other goals and iterate until the decision “all goals satisfied?” is yes.\nIt reads like an early articulation of what later methodologies still do: decompose, check consistency across concerns, loop rather than assume a single pass is enough.\nSorting out software advice (Figure 3-4) The “sorting out software advice” figure is a scatter of practices — top-down vs outside-in, walkthroughs, independent test teams, chief programmer teams, measurable milestones, configuration management, structured programming, “build it twice,” involve the user, and many more. 
The surrounding text stresses that the same slogan can be right in one situation and wasteful in another (for example, building a throwaway first version when the domain is unfamiliar vs when it is already well understood).\nUseful as a checklist of ideas to consider, not as a single recipe to apply everywhere.\nEthics: urban school attendance system (Chapter 2 case study) The italic quote on this page is the line I underlined in the margin: individual engineers can improve outcomes for society by paying attention to long-range human and social implications of designs, not only to technical correctness.\nIt pairs naturally with requirements and validation — “the right product” includes who it serves and how it affects them.\nWaterfall life-cycle (Figure 4-1) The classic waterfall diagram shows phases cascading in order. Each box is split diagonally: development work in one triangle and the matching V\u0026amp;V activity in the other — from system feasibility through requirements, product design, detailed design, code, integration, implementation, to operations and maintenance. 
Backward arrows express rework when a phase’s review finds problems.\nEven when a team does not ship “pure waterfall,” the diagram still names the kinds of artifacts and checks that keep showing up under other names.\nIncremental waterfall (Figure 4-4) The incremental variant anchors a shared product design, then runs parallel or staggered increments through detailed design, coding, integration, and the rest — each increment still carrying the same build / verify pairing and feedback to earlier steps (including back toward product design).\nIt is a structured way to picture delivery in slices without pretending that upstream decisions never change.\nSubgoals, verification, and validation (Chapter 4) This page lists nine engineering subgoals in sequence: feasibility, requirements, product design, detailed design, coding, integration, implementation, maintenance (repeated per update), and phaseout. It then defines verification as correspondence to specification — “Are we building the product right?” — and validation as fitness for the mission — “Are we building the right product?” Configuration management appears alongside V\u0026amp;V as another cross-cutting obligation. A footnote admits the strict sequence is a teaching simplification; prototyping, incremental development, and overlapping work are called out as common adjustments.\nThese scans are for my own revision; figures and wording belong to the original book and publisher. 
For related notes on compilers and grammars from another text, see the companion post on compiler pipeline, language genealogy, and parse trees.","date":"2026-04-13","date_unix":1776081600,"id":"https://antoineboucher.info/CV/blog/posts/software-engineering-textbook-figures/","permalink":"https://antoineboucher.info/CV/blog/posts/software-engineering-textbook-figures/","post_kind":"article","section":"posts","summary":"Study scans from a software engineering text — goal-oriented decomposition, classic and incremental waterfall, nine life-cycle subgoals, V\u0026V, a collage of process advice, and an ethics case-study takeaway.","tag_refs":[{"name":"Software Engineering","permalink":"https://antoineboucher.info/CV/blog/tags/software-engineering/"},{"name":"SDLC","permalink":"https://antoineboucher.info/CV/blog/tags/sdlc/"},{"name":"Waterfall","permalink":"https://antoineboucher.info/CV/blog/tags/waterfall/"},{"name":"Verification","permalink":"https://antoineboucher.info/CV/blog/tags/verification/"},{"name":"Validation","permalink":"https://antoineboucher.info/CV/blog/tags/validation/"},{"name":"Education","permalink":"https://antoineboucher.info/CV/blog/tags/education/"}],"tags":["Software Engineering","SDLC","Waterfall","Verification","Validation","Education"],"tags_text":"Software Engineering SDLC Waterfall Verification Validation Education","thumb":"https://antoineboucher.info/CV/blog/posts/software-engineering-textbook-figures/images/ethics-urban-school-attendance_hu_4504e477a7e9bdd1.png","title":"Software lifecycle notes — GOALS, waterfall, and verification vs validation"},{"content":"Introduction Blender is a powerhouse for 3D creation, offering a Python API that allows users to extend its functionality with scripts, add-ons, and plugins. 
However, one challenge developers face is installing external Python packages within Blender’s isolated Python environment.\nUnlike system-wide Python installations, Blender bundles its own Python interpreter, making standard package installations tricky. This article presents a more general and robust method to install Python dependencies for Blender add-ons and plugins — ensuring a smooth workflow across different versions.\nRunning the installer from the Text Editor (Scripting workspace).\nWhy Install External Packages in Blender’s Python? Many advanced Blender add-ons require external Python libraries, such as:\nNumPy \u0026amp; SciPy — Scientific computing and mesh processing Meshio — Converting mesh file formats Pillow — Image processing Requests — Handling HTTP requests for APIs PyTorch/TensorFlow — Machine learning integration Since Blender ships with its own Python environment, these packages must be installed within Blender’s directory rather than the system-wide Python installation.\nA Robust \u0026amp; Generalized Python Script for Add-ons This script ensures the automatic installation of required packages inside Blender’s Python environment. 
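Before the full script, the detection step can be sketched in isolation. This standalone fragment runs outside Blender (no bpy; the package list is an illustrative subset) and shows how missing modules are identified before pip is ever invoked:

```python
import importlib.util

# Illustrative subset of an add-on's requirements (module name -> pip spec).
REQUIRED_PACKAGES = {
    "requests": "requests==2.31.0",
    "meshio": "meshio==5.3.4",
}

def missing_packages(packages):
    """Return the pip specs whose modules cannot currently be imported."""
    return [
        pip_spec
        for module_name, pip_spec in packages.items()
        if importlib.util.find_spec(module_name) is None
    ]
```

The full script below does the same check with `__import__` inside a try/except ImportError; `importlib.util.find_spec` is a lighter alternative that avoids actually importing the module.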
It detects missing modules, and installs them using Blender’s sys.executable, and provides user feedback.\n💡 Features ✔️ Works inside Blender without requiring terminal commands\n✔️ Installs multiple packages automatically\n✔️ Uses a user-writable directory instead of modifying Blender’s core files\n✔️ Runs asynchronously to keep Blender responsive\n📜 The Installation Script import bpy import sys import site import logging import subprocess import threading # Set up logging logger = logging.getLogger(__name__) logging.basicConfig(level=logging.INFO) # List of packages required by the add-on/plugin REQUIRED_PACKAGES = { \u0026#34;fileseq\u0026#34;: \u0026#34;fileseq==1.15.2\u0026#34;, \u0026#34;meshio\u0026#34;: \u0026#34;meshio==5.3.4\u0026#34;, \u0026#34;rich\u0026#34;: \u0026#34;rich==13.7.0\u0026#34;, \u0026#34;requests\u0026#34;: \u0026#34;requests==2.31.0\u0026#34; } def get_blender_python_path(): \u0026#34;\u0026#34;\u0026#34;Returns the path of Blender\u0026#39;s embedded Python interpreter.\u0026#34;\u0026#34;\u0026#34; return sys.executable def get_modules_path(): \u0026#34;\u0026#34;\u0026#34;Return a writable directory for installing Python packages.\u0026#34;\u0026#34;\u0026#34; return bpy.utils.user_resource(\u0026#34;SCRIPTS\u0026#34;, path=\u0026#34;modules\u0026#34;, create=True) def append_modules_to_sys_path(modules_path): \u0026#34;\u0026#34;\u0026#34;Ensure Blender can find installed packages.\u0026#34;\u0026#34;\u0026#34; if modules_path not in sys.path: sys.path.append(modules_path) site.addsitedir(modules_path) def display_message(message, title=\u0026#34;Notification\u0026#34;, icon=\u0026#39;INFO\u0026#39;): \u0026#34;\u0026#34;\u0026#34;Show a popup message in Blender.\u0026#34;\u0026#34;\u0026#34; def draw(self, context): self.layout.label(text=message) def show_popup(): bpy.context.window_manager.popup_menu(draw, title=title, icon=icon) return None # Stops timer bpy.app.timers.register(show_popup) def install_package(package, 
modules_path): \u0026#34;\u0026#34;\u0026#34;Install a single package using Blender\u0026#39;s Python.\u0026#34;\u0026#34;\u0026#34; try: logger.info(f\u0026#34;Installing {package}...\u0026#34;) subprocess.check_call([ get_blender_python_path(), \u0026#34;-m\u0026#34;, \u0026#34;pip\u0026#34;, \u0026#34;install\u0026#34;, \u0026#34;--upgrade\u0026#34;, \u0026#34;--target\u0026#34;, modules_path, package ]) logger.info(f\u0026#34;{package} installed successfully.\u0026#34;) except subprocess.CalledProcessError as e: logger.error(f\u0026#34;Failed to install {package}. Error: {e}\u0026#34;) display_message(f\u0026#34;Failed to install {package}. Check console for details.\u0026#34;, icon=\u0026#39;ERROR\u0026#39;) def background_install_packages(packages, modules_path): \u0026#34;\u0026#34;\u0026#34;Install missing packages in a background thread.\u0026#34;\u0026#34;\u0026#34; def install_packages(): wm = bpy.context.window_manager wm.progress_begin(0, len(packages)) for i, (module_name, pip_spec) in enumerate(packages.items()): try: __import__(module_name) logger.info(f\u0026#34;\u0026#39;{module_name}\u0026#39; is already installed.\u0026#34;) except ImportError: install_package(pip_spec, modules_path) wm.progress_update(i + 1) wm.progress_end() display_message(\u0026#34;All required packages installed successfully.\u0026#34;) threading.Thread(target=install_packages, daemon=True).start() # Setup modules_path = get_modules_path() append_modules_to_sys_path(modules_path) # Start package installation background_install_packages(REQUIRED_PACKAGES, modules_path) 📌 Step-by-Step Breakdown 1️⃣ Identifying Blender’s Python Path def get_blender_python_path(): return sys.executable Finds Blender’s Python interpreter (python.exe) to ensure pip installs packages correctly. 
2️⃣ Choosing the Installation Directory def get_modules_path(): return bpy.utils.user_resource(\u0026#34;SCRIPTS\u0026#34;, path=\u0026#34;modules\u0026#34;, create=True) Installs packages in Blender’s user scripts directory (AppData\\Roaming\\Blender Foundation\\Blender\\\u0026lt;version\u0026gt;\\scripts\\modules). 3️⃣ Ensuring Packages Are Found def append_modules_to_sys_path(modules_path): if modules_path not in sys.path: sys.path.append(modules_path) site.addsitedir(modules_path) Adds the modules directory to Python’s search path (sys.path), ensuring that Blender can find the installed packages. 4️⃣ Installing a Single Package def install_package(package, modules_path): subprocess.check_call([ get_blender_python_path(), \u0026#34;-m\u0026#34;, \u0026#34;pip\u0026#34;, \u0026#34;install\u0026#34;, \u0026#34;--upgrade\u0026#34;, \u0026#34;--target\u0026#34;, modules_path, package ]) Uses Blender’s Python environment to install the required package in the correct directory. 5️⃣ Handling Multiple Packages def background_install_packages(packages, modules_path): threading.Thread(target=install_packages, daemon=True).start() Runs installation in a background thread to prevent Blender from freezing. 6️⃣ Displaying User Messages def display_message(message, title=\u0026#34;Notification\u0026#34;, icon=\u0026#39;INFO\u0026#39;): bpy.app.timers.register(show_popup) Provides popup notifications for a user-friendly installation experience. 🚀 How to Use This Script Option 1: Running Inside Blender Open Blender (Version 4.2+). Go to the Scripting workspace. Open the Text Editor. Paste the script and click Run Script. Blender will automatically install the required packages and display a popup when complete. Option 2: Using as Part of an Add-on Include this script inside your add-on to ensure required dependencies are installed automatically. 
def register(): \u0026#34;\u0026#34;\u0026#34;Register all classes and set up PointerProperties.\u0026#34;\u0026#34;\u0026#34; modules_path = get_modules_path() append_modules_to_sys_path(modules_path) # Install required packages in the background background_install_packages(REQUIRED_PACKAGES, modules_path) ... 🛠️ Troubleshooting Packages Not Found After Installation?\nRestart Blender after running the script. Manually check sys.path to ensure the correct directory is listed: import sys print(sys.path) ✨ Final Thoughts This generalized method allows Blender users and add-on developers to install Python packages seamlessly within Blender’s sandboxed environment. By automating dependency installation, you can ensure maximum compatibility without requiring users to install external tools manually.\n🔗 Further Reading Blender API Documentation Managing Python in Blender Python Package Installation Guide Originally published on Medium.","date":"2025-02-08","date_unix":1739030400,"id":"https://antoineboucher.info/CV/blog/posts/blender-python-packages/","permalink":"https://antoineboucher.info/CV/blog/posts/blender-python-packages/","post_kind":"tutorial","section":"posts","summary":"Automatic pip installs into Blender’s embedded Python via a user-writable modules folder, background thread, and UI popups.","tag_refs":[{"name":"Blender","permalink":"https://antoineboucher.info/CV/blog/tags/blender/"},{"name":"Python","permalink":"https://antoineboucher.info/CV/blog/tags/python/"},{"name":"Pip","permalink":"https://antoineboucher.info/CV/blog/tags/pip/"},{"name":"Addons","permalink":"https://antoineboucher.info/CV/blog/tags/addons/"}],"tags":["Blender","Python","pip","Addons"],"tags_text":"Blender Python pip Addons","thumb":"https://antoineboucher.info/CV/blog/posts/blender-python-packages/img-001_hu_142f8a3c2fa8a621.png","title":"A Method to Install Python Packages for Add-ons \u0026 Plugins in Blender (Windows, Blender 4.2+)"},{"content":"OpenAI-style plugins expose an 
HTTP API described by an OpenAPI document so ChatGPT can call your tools safely. FastAPI generates OpenAPI for you, which fits this model well.\n1. Define the API in FastAPI Routes return JSON with stable shapes (no ambiguous free text where structure matters). Add summaries and descriptions on paths and fields — they help the model choose the right tool. 2. Publish openapi.json FastAPI serves /openapi.json by default; the plugin manifest points at this URL (or a static copy you version). Keep schemas tight: enums, required fields, and examples reduce bad calls. 3. Plugin manifest Host ai-plugin.json (or the format required by the current OpenAI developer docs) over HTTPS. Manifest references your API base URL and OpenAPI location. 4. Auth Prefer OAuth or API keys as documented for your integration; never commit secrets. Validate tokens inside FastAPI dependencies or middleware. 5. Deploy HTTPS endpoint reachable from OpenAI’s servers. Logging and idempotency for side-effecting routes. 6. Test manually Call routes with curl or HTTPie using the same payloads the model will send. Iterate on descriptions and constraints before exposing wide traffic. 
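As a concrete sketch of step 3, here is roughly what a minimal manifest looked like under the legacy ai-plugin.json format. Every field value and URL here is illustrative, and the field set changes with OpenAI's platform; verify against the current developer docs before shipping:

```python
import json

# Hypothetical manifest for a FastAPI service at example.com (legacy ai-plugin.json shape).
manifest = {
    "schema_version": "v1",
    "name_for_human": "Todo Helper",
    "name_for_model": "todo_helper",
    "description_for_human": "Manage a simple todo list.",
    "description_for_model": "Plugin for creating, listing, and deleting todo items.",
    "auth": {"type": "none"},
    "api": {
        "type": "openapi",
        # FastAPI serves this path by default; the manifest just points at it.
        "url": "https://example.com/openapi.json",
    },
    "logo_url": "https://example.com/logo.png",
    "contact_email": "dev@example.com",
    "legal_info_url": "https://example.com/legal",
}

print(json.dumps(manifest, indent=2))
```

The `description_for_model` field is the one the model actually reads when deciding whether to call your tool, which is why the checklist above stresses iterating on descriptions.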
Details change with OpenAI’s platform updates — always follow the latest plugin / tools / actions documentation when wiring production apps.","date":"2024-06-01","date_unix":1717250400,"id":"https://antoineboucher.info/CV/blog/posts/fastapi-chatgpt-plugin-overview/","permalink":"https://antoineboucher.info/CV/blog/posts/fastapi-chatgpt-plugin-overview/","post_kind":"article","section":"posts","summary":"Checklist for a minimal ChatGPT plugin — FastAPI service, OpenAPI schema, auth, and hosting.","tag_refs":[{"name":"ChatGPT","permalink":"https://antoineboucher.info/CV/blog/tags/chatgpt/"},{"name":"OpenAI","permalink":"https://antoineboucher.info/CV/blog/tags/openai/"},{"name":"FastAPI","permalink":"https://antoineboucher.info/CV/blog/tags/fastapi/"},{"name":"Python","permalink":"https://antoineboucher.info/CV/blog/tags/python/"},{"name":"Plugin","permalink":"https://antoineboucher.info/CV/blog/tags/plugin/"},{"name":"Tutorial","permalink":"https://antoineboucher.info/CV/blog/tags/tutorial/"}],"tags":["ChatGPT","OpenAI","FastAPI","Python","Plugin","Tutorial"],"tags_text":"ChatGPT OpenAI FastAPI Python Plugin Tutorial","thumb":"/CV/blog/images/post-kind-article.png","title":"ChatGPT plugin with FastAPI — implementation outline"},{"content":"Introduction In this report, we present an experiment with technical indicators using the BatchBacktesting project available on GitHub at the following link: BatchBacktesting.\nInstalling Dependencies To get started, install the necessary libraries:\n!pip install numpy pandas httpx rich backtesting pandas-ta\nImporting Modules Here are the modules to import for the script:\nimport os\nimport pandas as pd\nimport numpy as np\nfrom datetime import datetime\nimport httpx\nimport concurrent.futures\nimport glob\nimport warnings\nfrom rich.progress import track\nfrom backtesting import Backtest, Strategy\nfrom backtesting.lib import crossover\nimport pandas_ta as taPanda\nwarnings.filterwarnings(\u0026ldquo;ignore\u0026rdquo;)\nAPI Configuration Replace the placeholders FMP_API_KEY and BINANCE_API_KEY with your actual API keys to access the data from the respective
services.\nBASE_URL_FMP = \u0026ldquo;https://financialmodelingprep.com/api/v3\"\nBASE_URL_BINANCE = \u0026ldquo;https://fapi.binance.com/fapi/v1/\"\nFMP_API_KEY = \u0026ldquo;YOUR_FMP_API_KEY\u0026rdquo;\nBINANCE_API_KEY = \u0026ldquo;YOUR_BINANCE_API_KEY\u0026rdquo;\nAPI Request Functions The following functions allow you to make API requests to different endpoints and retrieve historical price data for cryptocurrencies and stocks.\ndef make_api_request(api_endpoint, params):\nwith httpx.Client() as client:\nresponse = client.get(api_endpoint, params=params)\nif response.status_code == 200:\nreturn response.json()\nprint(\u0026ldquo;Error: Failed to retrieve data from API\u0026rdquo;)\nreturn None\ndef get_historical_price_full_crypto(symbol):\napi_endpoint = f\u0026rdquo;{BASE_URL_FMP}/historical-price-full/crypto/{symbol}\u0026rdquo;\nparams = {\u0026ldquo;apikey\u0026rdquo;: FMP_API_KEY}\nreturn make_api_request(api_endpoint, params)\ndef get_historical_price_full_stock(symbol):\napi_endpoint = f\u0026quot;{BASE_URL_FMP}/historical-price-full/{symbol}\u0026quot;\nparams = {\u0026ldquo;apikey\u0026rdquo;: FMP_API_KEY}\nreturn make_api_request(api_endpoint, params)\ndef get_SP500():\napi_endpoint = \u0026ldquo;https://en.wikipedia.org/wiki/List_of_S%26P_500_companies\u0026rdquo;\ndata = pd.read_html(api_endpoint)\nreturn list(data[0][\u0026lsquo;Symbol\u0026rsquo;])\ndef get_all_crypto():\nreturn [\n\u0026ldquo;BTCUSD\u0026rdquo;, \u0026ldquo;ETHUSD\u0026rdquo;, \u0026ldquo;LTCUSD\u0026rdquo;, \u0026ldquo;BCHUSD\u0026rdquo;, \u0026ldquo;XRPUSD\u0026rdquo;, \u0026ldquo;EOSUSD\u0026rdquo;,\n\u0026ldquo;XLMUSD\u0026rdquo;, \u0026ldquo;TRXUSD\u0026rdquo;, \u0026ldquo;ETCUSD\u0026rdquo;, \u0026ldquo;DASHUSD\u0026rdquo;, \u0026ldquo;ZECUSD\u0026rdquo;, \u0026ldquo;XTZUSD\u0026rdquo;,\n\u0026ldquo;XMRUSD\u0026rdquo;, \u0026ldquo;ADAUSD\u0026rdquo;, \u0026ldquo;NEOUSD\u0026rdquo;, \u0026ldquo;XEMUSD\u0026rdquo;, \u0026ldquo;VETUSD\u0026rdquo;, 
\u0026ldquo;DOGEUSD\u0026rdquo;,\n\u0026ldquo;OMGUSD\u0026rdquo;, \u0026ldquo;ZRXUSD\u0026rdquo;, \u0026ldquo;BATUSD\u0026rdquo;, \u0026ldquo;USDTUSD\u0026rdquo;, \u0026ldquo;LINKUSD\u0026rdquo;, \u0026ldquo;BTTUSD\u0026rdquo;,\n\u0026ldquo;BNBUSD\u0026rdquo;, \u0026ldquo;ONTUSD\u0026rdquo;, \u0026ldquo;QTUMUSD\u0026rdquo;, \u0026ldquo;ALGOUSD\u0026rdquo;, \u0026ldquo;ZILUSD\u0026rdquo;, \u0026ldquo;ICXUSD\u0026rdquo;,\n\u0026ldquo;KNCUSD\u0026rdquo;, \u0026ldquo;ZENUSD\u0026rdquo;, \u0026ldquo;THETAUSD\u0026rdquo;, \u0026ldquo;IOSTUSD\u0026rdquo;, \u0026ldquo;ATOMUSD\u0026rdquo;, \u0026ldquo;MKRUSD\u0026rdquo;,\n\u0026ldquo;COMPUSD\u0026rdquo;, \u0026ldquo;YFIUSD\u0026rdquo;, \u0026ldquo;SUSHIUSD\u0026rdquo;, \u0026ldquo;SNXUSD\u0026rdquo;, \u0026ldquo;UMAUSD\u0026rdquo;, \u0026ldquo;BALUSD\u0026rdquo;,\n\u0026ldquo;AAVEUSD\u0026rdquo;, \u0026ldquo;UNIUSD\u0026rdquo;, \u0026ldquo;RENBTCUSD\u0026rdquo;, \u0026ldquo;RENUSD\u0026rdquo;, \u0026ldquo;CRVUSD\u0026rdquo;, \u0026ldquo;SXPUSD\u0026rdquo;,\n\u0026ldquo;KSMUSD\u0026rdquo;, \u0026ldquo;OXTUSD\u0026rdquo;, \u0026ldquo;DGBUSD\u0026rdquo;, \u0026ldquo;LRCUSD\u0026rdquo;, \u0026ldquo;WAVESUSD\u0026rdquo;, \u0026ldquo;NMRUSD\u0026rdquo;,\n\u0026ldquo;STORJUSD\u0026rdquo;, \u0026ldquo;KAVAUSD\u0026rdquo;, \u0026ldquo;RLCUSD\u0026rdquo;, \u0026ldquo;BANDUSD\u0026rdquo;, \u0026ldquo;SCUSD\u0026rdquo;, \u0026ldquo;ENJUSD\u0026rdquo;\n]\ndef get_financial_statements_lists():\napi_endpoint = f\u0026quot;{BASE_URL_FMP}/financial-statement-symbol-lists\u0026quot;\nparams = {\u0026ldquo;apikey\u0026rdquo;: FMP_API_KEY}\nreturn make_api_request(api_endpoint, params)\nImplementing the EMA Strategy The EMA (Exponential Moving Average) is a type of moving average that places a greater weight and significance on the most recent data points. 
The EMA reacts more quickly to recent price changes than the simple moving average (SMA), which assigns equal weight to all observations in the period.\nclass EMA(Strategy):\nn1 = 20\nn2 = 80\ndef init(self): close = self.data.Close self.ema20 = self.I(taPanda.ema, close.s, self.n1) self.ema80 = self.I(taPanda.ema, close.s, self.n2) def next(self): price = self.data.Close if crossover(self.ema20, self.ema80): self.position.close() self.buy(sl=0.90 * price, tp=1.25 * price) elif crossover(self.ema80, self.ema20): self.position.close() self.sell(sl=1.10 * price, tp=0.75 * price) In this strategy:\nema20 and ema80 are calculated for a given stock or cryptocurrency. A buy signal is generated when ema20 crosses above ema80. A sell signal is generated when ema80 crosses above ema20. Stop loss (sl) and take profit (tp) levels are set to limit potential losses and secure gains. Implementing the MACD Strategy The MACD (Moving Average Convergence Divergence) is a trend-following momentum indicator that shows the relationship between two moving averages of a security’s price. It is calculated by subtracting the 26-period EMA from the 12-period EMA. The result is the MACD line. A nine-day EMA of the MACD called the “signal line” is then plotted on top of the MACD line, which can function as a trigger for buy and sell signals.\nclass MACD(Strategy):\nshort_period = 12\nlong_period = 26\nsignal_period = 9\ndef init(self): close = self.data.Close self.macd = self.I(taPanda.macd, close.s, self.short_period, self.long_period, self.signal_period) def next(self): macd_line = self.macd.macd signal_line = self.macd.signal if crossover(macd_line, signal_line): self.position.close() self.buy() elif crossover(signal_line, macd_line): self.position.close() self.sell() In this strategy:\nmacd_line and signal_line are calculated using short-term (12-period) and long-term (26-period) EMAs. A buy signal is generated when the macd_line crosses above the signal_line.
A sell signal is generated when the signal_line crosses above the macd_line. Running Backtests The following functions allow you to process instruments and run backtests with specified strategies.\ndef run_backtests_strategies(instruments, strategies):\nstrategies = [x for x in STRATEGIES if x.__name__ in strategies]\noutputs = []\nwith concurrent.futures.ThreadPoolExecutor() as executor:\nfutures = []\nfor strategy in strategies:\nfuture = executor.submit(run_backtests, instruments, strategy, 4)\nfutures.append(future)\nfor future in concurrent.futures.as_completed(futures):\noutputs.extend(future.result())\nreturn outputs\ndef check_crypto(instrument):\nreturn instrument in get_all_crypto()\ndef check_stock(instrument):\nreturn instrument not in get_financial_statements_lists()\ndef process_instrument(instrument, strategy):\ntry:\nif check_crypto(instrument):\ndata = get_historical_price_full_crypto(instrument)\nelse:\ndata = get_historical_price_full_stock(instrument)\nif data is None or \u0026ldquo;historical\u0026rdquo; not in data:\nprint(f\u0026quot;Error processing {instrument}: No data\u0026quot;)\nreturn None\ndata = clean_data(data)\nbt = Backtest(data, strategy=strategy, cash=100000, commission=0.002, exclusive_orders=True)\noutput = bt.run()\noutput = process_output(output, instrument, strategy)\nreturn output, bt\nexcept Exception as e:\nprint(f\u0026quot;Error processing {instrument}: {str(e)}\u0026quot;)\nreturn None\ndef clean_data(data):\ndata = data[\u0026ldquo;historical\u0026rdquo;]\ndata = pd.DataFrame(data)\ndata.columns = [x.title() for x in data.columns]\ndata = data.drop([\u0026ldquo;Adjclose\u0026rdquo;, \u0026ldquo;Unadjustedvolume\u0026rdquo;, \u0026ldquo;Change\u0026rdquo;, \u0026ldquo;Changepercent\u0026rdquo;, \u0026ldquo;Vwap\u0026rdquo;, \u0026ldquo;Label\u0026rdquo;, \u0026ldquo;Changeovertime\u0026rdquo;], axis=1)\ndata[\u0026ldquo;Date\u0026rdquo;] = 
pd.to_datetime(data[\u0026ldquo;Date\u0026rdquo;])\ndata.set_index(\u0026ldquo;Date\u0026rdquo;, inplace=True)\ndata = data.iloc[::-1]\nreturn data\ndef process_output(output, instrument, strategy, in_row=True):\nif in_row:\noutput = pd.DataFrame(output).T\noutput[\u0026ldquo;Instrument\u0026rdquo;] = instrument\noutput[\u0026ldquo;Strategy\u0026rdquo;] = strategy.__name__\noutput.pop(\u0026quot;_strategy\u0026quot;)\nreturn output\ndef save_output(output, output_dir, instrument, start, end):\nprint(f\u0026quot;Saving output for {instrument}\u0026quot;)\nfileNameOutput = f\u0026quot;{output_dir}/{instrument}-{start}-{end}.csv\u0026quot;\noutput.to_csv(fileNameOutput)\ndef plot_results(bt, output_dir, instrument, start, end):\nprint(f\u0026quot;Saving chart for {instrument}\u0026quot;)\nfileNameChart = f\u0026quot;{output_dir}/{instrument}-{start}-{end}.html\u0026quot;\nbt.plot(filename=fileNameChart, open_browser=False)\ndef run_backtests(instruments, strategy, num_threads=4, generate_plots=False):\noutputs = []\noutput_dir = f\u0026quot;output/raw/{strategy.__name__}\u0026quot;\noutput_dir_charts = f\u0026quot;output/charts/{strategy.__name__}\u0026quot;\nif not os.path.exists(output_dir):\nos.makedirs(output_dir)\nif not os.path.exists(output_dir_charts):\nos.makedirs(output_dir_charts)\nwith concurrent.futures.ThreadPoolExecutor(max_workers=num_threads) as executor:\nfuture_to_instrument = {executor.submit(process_instrument, instrument, strategy): instrument for instrument in instruments}\nfor future in concurrent.futures.as_completed(future_to_instrument):\ninstrument = future_to_instrument[future]\noutput = future.result()\nif output is not None:\noutputs.append(output[0])\nsave_output(output[0], output_dir, instrument, output[0][\u0026ldquo;Start\u0026rdquo;].to_string().strip().split()[1], output[0][\u0026ldquo;End\u0026rdquo;].to_string().strip().split()[1])\nif generate_plots:\nplot_results(output[1], output_dir_charts, instrument, 
output[0][\u0026ldquo;Start\u0026rdquo;].to_string().strip().split()[1], output[0][\u0026ldquo;End\u0026rdquo;].to_string().strip().split()[1])\ndata_frame = pd.concat(outputs)\nstart = data_frame[\u0026ldquo;Start\u0026rdquo;].to_string().strip().split()[1]\nend = data_frame[\u0026ldquo;End\u0026rdquo;].to_string().strip().split()[1]\nfileNameOutput = f\u0026quot;output/{strategy.__name__}-{start}-{end}.csv\u0026quot;\ndata_frame.to_csv(fileNameOutput)\nreturn data_frame\nExecuting the Scripts To execute the backtests, use the following functions:\ntickers = get_SP500()\nrun_backtests(tickers, strategy=EMA, num_threads=12, generate_plots=True)\nrun_backtests(tickers, strategy=MACD, num_threads=12, generate_plots=True)\nticker = get_all_crypto()\nrun_backtests(ticker, strategy=EMA, num_threads=12, generate_plots=True)\nrun_backtests(ticker, strategy=MACD, num_threads=12, generate_plots=True)\nThe output directory of the BatchBacktesting project on GitHub (BatchBacktesting Output Directory) does not contain pre-calculated results; the authors likely keep user-specific test output out of the repository to avoid clutter.\nTo obtain calculated values for your own tests, run the script locally with your chosen parameters and strategies.
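The indicator math behind both strategies can also be checked without any API access or the backtesting framework. A dependency-free sketch (plain Python; the 2/(span+1) smoothing factor is the standard EMA definition, and the crossover test mirrors what the strategies react to):

```python
def ema(values, span):
    """Exponential moving average, seeded with the first value."""
    alpha = 2.0 / (span + 1)
    out, prev = [], values[0]
    for v in values:
        prev = alpha * v + (1 - alpha) * prev
        out.append(prev)
    return out

def macd(values, short=12, long=26, signal=9):
    """MACD line (short EMA minus long EMA) and its signal line."""
    macd_line = [s - l for s, l in zip(ema(values, short), ema(values, long))]
    return macd_line, ema(macd_line, signal)

def crossover_points(fast, slow):
    """Indices where the fast series crosses above the slow series."""
    return [i for i in range(1, len(fast))
            if fast[i - 1] <= slow[i - 1] and fast[i] > slow[i]]
```

On a constant price series the MACD line stays at zero and no crossovers fire, which is a quick sanity check before trusting the same logic on real data.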
After executing the script, the results will be saved in the output directory of your local project.\nHere is an example output link for reference: EMA Chart for AAPL.\nResults Analysis Here is an example of the results obtained for the instruments with the highest and lowest returns for EMA:\nTop 5 instruments with the highest returns: BTCBUSD: 293.78% ALB: 205.97% OMGUSD: 199.62% BBWI: 196.82% GRMN: 193.47% Top 5 instruments with the lowest returns: BTTBUSD: -99.93% UAL: -82.63% NCLH: -81.51% LNC: -78.02% CHRW: -76.38%\nConclusion The BatchBacktesting project offers a flexible and powerful approach for testing and analyzing the performance of technical indicators on stock and cryptocurrency markets. The provided functions allow easy integration with financial services APIs and straightforward data manipulation. The experimental results can be used to develop and refine algorithmic trading strategies based on observed performance.\nOriginally published on Medium.","date":"2024-05-30","date_unix":1717095600,"id":"https://antoineboucher.info/CV/blog/posts/multiple-indicators-backtesting/","permalink":"https://antoineboucher.info/CV/blog/posts/multiple-indicators-backtesting/","post_kind":"article","section":"posts","summary":"Batch backtests with BatchBacktesting — EMA and MACD strategies, FMP/Binance APIs, and aggregated results across stocks and crypto.","tag_refs":[{"name":"Python","permalink":"https://antoineboucher.info/CV/blog/tags/python/"},{"name":"Trading","permalink":"https://antoineboucher.info/CV/blog/tags/trading/"},{"name":"Backtesting","permalink":"https://antoineboucher.info/CV/blog/tags/backtesting/"},{"name":"Crypto","permalink":"https://antoineboucher.info/CV/blog/tags/crypto/"}],"tags":["Python","Trading","Backtesting","Crypto"],"tags_text":"Python Trading Backtesting
Crypto","thumb":"https://antoineboucher.info/CV/blog/posts/multiple-indicators-backtesting/img-001_hu_2920deee6e320fe9.png","title":"Multiple Technical Indicators Backtesting on Multiple Tickers using Python"},{"content":"As a data enthusiast and LEGO fan, I decided to delve into the world of LEGO using historical data. My goal was to understand the trends, pricing, and characteristics of LEGO sets over time. Using datasets from Rebrickable and analysis tools like Pandas, Matplotlib, and Scikit-Learn, I conducted a comprehensive analysis. Here’s a journey through the history and economics of LEGO sets.\nDataset Overview The datasets used for this analysis include various aspects of LEGO sets, parts, and themes:\ncolors.csv: Information on LEGO colors, including unique IDs, names, RGB values, and transparency. inventories.csv: Inventory details, including unique IDs, versions, and set numbers. inventory_parts.csv: Part inventories, including part numbers, colors, quantities, and spare parts. inventory_sets.csv: Information on which inventory is included in which sets. part_categories.csv: Part categories and their unique IDs. part_relationships.csv: Relationships between different parts. parts.csv: Information on LEGO parts, including part numbers, names, and categories. sets.csv: Details of LEGO sets, including set numbers, names, release years, themes, and part counts. themes.csv: Information on LEGO themes, including unique IDs, names, and parent themes. 
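The per-year aggregations computed with pandas in the next section reduce to a simple group-and-count. This dependency-free sketch over rows shaped like sets.csv (the example values are made up) mirrors groupby('year')['name'].nunique() and groupby('year')['num_parts'].mean():

```python
from collections import Counter

# Minimal stand-in rows shaped like sets.csv (hypothetical values).
sets_rows = [
    {"set_num": "001-1", "name": "Gears", "year": 1965, "num_parts": 43},
    {"set_num": "002-1", "name": "Town Plan", "year": 1965, "num_parts": 200},
    {"set_num": "003-1", "name": "Castle", "year": 1978, "num_parts": 471},
]

def sets_per_year(rows):
    """Count distinct set names per release year."""
    names_by_year = {}
    for row in rows:
        names_by_year.setdefault(row["year"], set()).add(row["name"])
    return {year: len(names) for year, names in names_by_year.items()}

def avg_parts_per_year(rows):
    """Average num_parts per release year."""
    totals, counts = Counter(), Counter()
    for row in rows:
        totals[row["year"]] += row["num_parts"]
        counts[row["year"]] += 1
    return {year: totals[year] / counts[year] for year in totals}
```

The pandas versions do the same work in two lines each; the long-hand form just makes explicit what the groupby is aggregating.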
Set Analysis: Trends Over the Years I explored the trends in LEGO sets over the years by visualizing the number of sets released each year and the average number of parts per set.
sets.groupby('year')['name'].nunique().plot(kind='bar')
plt.title("The Number of Sets by Year")
plt.xlabel("Year")
plt.ylabel("Number of Sets")
plt.show()
parts_by_year = sets[['year', 'num_parts']].groupby('year', as_index=False).mean()
parts_by_year.plot(x='year', y='num_parts', color="purple")
plt.title("Average Number of Parts by Year")
plt.xlabel("Year")
plt.ylabel("Parts")
plt.show()
Theme Analysis: Top 10 Themes To identify the most popular LEGO themes, I plotted the 10 themes with the most sets.
set_themes = sets["theme_id"].value_counts()
set_themes = pd.DataFrame({"id": set_themes.index, "count": set_themes.values})
set_themes = pd.merge(set_themes, themes, on="id")
set_themes_no_parent = set_themes[pd.isnull(set_themes["parent_id"])]
set_themes_top_10 = set_themes_no_parent.sort_values(by=["count"], ascending=False)[:10]
top_10 = set_themes_top_10["count"]
top_10.index = set_themes_top_10["name"]
top_10.plot.bar(color="gold", rot=30)
plt.title("Top 10 Themes with the Most Sets")
plt.show()
Gathering Data with a Scraper To obtain historical and current data for LEGO sets, I developed a web scraper using Playwright, asyncio, pydantic, and aiohttp. 
Initially, I intended to use datasets from Rebrickable, but I found that the specific historical pricing data I wanted wasn’t available. Thus, I turned to BrickEconomy, a website that provides detailed information on LEGO sets, including historical prices. The scraper automates the data collection process, ensuring we have comprehensive data for analysis.
Setting Up the Environment First, we need to install the required packages:
!pip install playwright asyncio pydantic aiohttp
!playwright install
Imports and Initial Setup The necessary libraries are imported and the initial setup is done. Playwright is used for web scraping, asyncio for asynchronous programming, pydantic for data validation, and aiohttp for asynchronous HTTP requests.
import csv
from pydantic import BaseModel
from typing import Dict, List, Optional
from playwright.async_api import async_playwright
import asyncio
import json
import re
from datetime import datetime
Data Models We define data models using pydantic to structure the data we will scrape. These models help ensure the data is clean and well-organized.
class SetDetails(BaseModel):
    name: str
    value: str

class HistoryEntry(BaseModel):
    date: datetime
    number: float
    tooltip: Optional[str]
    annotation: Optional[str]
    annotationText: Optional[str]

class NewEntry(BaseModel):
    date: datetime
    value1: float
    value2: float
    value3: float
    value4: float
    description: Optional[str] = None

class LegoSet(BaseModel):
    details: List[SetDetails]
    pricing: List[SetDetails]
    quick_buy: List[SetDetails]
    set_predictions: List[SetDetails]
    set_facts: str
    subtheme_analysis: List[SetDetails]
Scraper Class The LegoAPI class is responsible for scraping the data from BrickEconomy. 
It initializes with a list of LEGO set numbers, navigates to the BrickEconomy website, and extracts the required information.
class LegoAPI:
    root_url = "https://www.brickeconomy.com"

    def __init__(self, set_list):
        self.set_list = set_list
        self.output_file = "lego_sets.csv"

    async def start(self):
        try:
            with open(self.set_list, "r") as f:
                set_list = [line.rstrip() for line in f.readlines()]
        except Exception as e:
            print("Error opening input file")
            raise e
        async with async_playwright() as p:
            browser = await p.chromium.launch(headless=False)
            page = await browser.new_page()
            for set_num in set_list:
                search_url = f"{self.root_url}/search?query={set_num}"
                await page.wait_for_load_state("load")
                await page.goto(search_url)
                try:
                    possible_links = await page.query_selector_all(
                        "#ContentPlaceHolder1_ctlSetsOverview_GridViewSets > tbody > tr:nth-child(2) > td.ctlsets-left > div.mb-5 > h4 > a"
                    )
                except Exception as e:
                    raise ValueError(f"Error parsing HTML: {e}")
                if not possible_links:
                    raise ValueError(f"No links found for set number: {set_num}")
                for link in possible_links:
                    href = await link.get_attribute("href")
                    test_num = href.split("/")[2].split("-")[0]
                    if str(test_num) in str(set_num):
                        set_details = href.split("/")[2:4]
                        await page.goto(self.root_url + href)
                        await page.wait_for_load_state("load")
                        await self.parse_history(page, set_num)
                        await self.parse_set(page, set_details)
            await browser.close()
Initialization and Input Handling:
The constructor (__init__) initializes the class with a list of LEGO set numbers and the 
output file name. The start method reads the set numbers from a file and starts the Playwright browser. Navigation and Data Extraction:
For each set number, the scraper navigates to the search results page on BrickEconomy. It extracts links to individual set pages and checks if the set number matches. The scraper then navigates to the set’s page and calls methods to parse historical data and set details.
The pricing data is embedded in a script tag near the end of the page’s HTML.
Parsing Historical Data The parse_history method extracts historical pricing data from the set’s page.
async def parse_history(self, page, set_num):
    try:
        script_tags = await page.query_selector_all("script")
        desired_script_content = None
        for script_tag in script_tags:
            script_content = await script_tag.inner_text()
            if "data.addRows([" in script_content:
                desired_script_content = script_content
                break
        if desired_script_content:
            pattern = r"data\.addRows\((\[.*?\])\);"
            matches = re.findall(pattern, desired_script_content, re.DOTALL)
            if matches:
                history_data = matches[0].replace("\n", "").replace("null", "'null'")
                history_entries = []
                pattern_date = re.compile(r"new Date\((\d+), (\d+), (\d+)\), (\d+\.?\d*), '([^']*)', '([^']*)'(?:, '([^']*)')?(?:, '([^']*)')?")
                for match in pattern_date.finditer(history_data):
                    year, month, day = map(int, match.groups()[:3])
                    month += 1
                    date = datetime(year, month, day)
                    value = match.group(4)
                    currency_value = match.group(5)
                    status = match.group(6) if match.group(6) else None
                    description = match.group(7) if match.group(7) else None
                    history_entries.append(
                        HistoryEntry(
                            date=date,
                            number=value,
                            tooltip=currency_value,
                            annotation=status,
                            annotationText=description,
                        )
                    )
                with open(f"{set_num}_history.csv", mode="w", newline="", encoding="utf-8") as file:
                    writer = csv.writer(file)
                    writer.writerow(["Date", "Value", "Currency Value", "Status", "Description"])
                    for entry in history_entries:
                        writer.writerow([entry.date, entry.number, entry.tooltip, entry.annotation, entry.annotationText])
                if len(matches) > 1:
                    new_data = matches[1].replace("\n", "").replace("null", "'null'")
                    pattern_new = re.compile(r"new Date\((\d+), (\d+), (\d+)\), (\d+\.?\d*), (\d+\.?\d*), (\d+\.?\d*), (\d+\.?\d*), '([^']*)'")
                    new_entries = []
                    for match in pattern_new.finditer(new_data):
                        year, month, day = map(int, match.groups()[:3])
                        month += 1
                        date = datetime(year, month, day)
                        value1, value2, value3, value4 = map(float, match.groups()[3:7])
                        description = match.group(8)
                        new_entries.append(
                            NewEntry(
                                date=date,
                                value1=value1,
                                value2=value2,
                                value3=value3,
                                value4=value4,
                                description=description,
                            )
                        )
                    with open(f"{set_num}_new.csv", mode="w", newline="", encoding="utf-8") as file:
                        writer = csv.writer(file)
                        writer.writerow(["Date", "Value 1", "Value 2", "Value 3", "Value 4", "Description"])
                        for entry in new_entries:
                            writer.writerow([entry.date, entry.value1, entry.value2, entry.value3, entry.value4, entry.description])
            else:
                print("Could not find 'data.addRows([...]);' in the script content.")
        else:
            print("Script tag with 'data.addRows([' not found.")
    except Exception as e:
        print(f"An error occurred while extracting data: {e}")
Extracting Script Content:
The method searches for a script tag containing historical data in the data.addRows format. Parsing and Writing Data:
If found, it extracts the data and parses it using regular expressions to create HistoryEntry objects. The data is then written to a CSV file. Parsing Set Details The parse_set method extracts various details about the LEGO set, including pricing, quick buy options, predictions, and subtheme analysis.
async def parse_set(self, page, set_details):
    set_details_div = await page.query_selector("div#ContentPlaceHolder1_SetDetails")
    set_details_rows = await set_details_div.query_selector_all(".row.rowlist")
    set_info = []
    for row in set_details_rows:
        key_element = await row.query_selector(".text-muted")
        value_element = await row.query_selector(".col-xs-7")
        if key_element and value_element:
            key = await key_element.inner_text()
            value = await value_element.inner_text()
            set_info.append(SetDetails(name=key.strip(), value=value.strip()))
    set_pricing_div = await page.query_selector("div#ContentPlaceHolder1_PanelSetPricing")
    pricing_rows = await set_pricing_div.query_selector_all(".row.rowlist")
    pricing_info = []
    for row in pricing_rows:
        key_element = await row.query_selector(".text-muted")
        value_element = await row.query_selector(".col-xs-7")
        if key_element and value_element:
            key = await key_element.inner_text()
            value = await value_element.inner_text()
            pricing_info.append(SetDetails(name=key.strip(), value=value.strip()))
    quick_buy_div = await page.query_selector("div#ContentPlaceHolder1_Panel","date":"2024-05-30","date_unix":1717084800,"id":"https://antoineboucher.info/CV/blog/posts/economics-lego-data-science/","permalink":"https://antoineboucher.info/CV/blog/posts/economics-lego-data-science/","post_kind":"article","section":"posts","summary":"LEGO trends, pricing, and themes from Rebrickable data and a BrickEconomy scraper — Pandas, charts, and linear regression on set 001-1.","tag_refs":[{"name":"LEGO","permalink":"https://antoineboucher.info/CV/blog/tags/lego/"},{"name":"Data Science","permalink":"https://antoineboucher.info/CV/blog/tags/data-science/"},{"name":"Pandas","permalink":"https://antoineboucher.info/CV/blog/tags/pandas/"},{"name":"Scraping","permalink":"https://antoineboucher.info/CV/blog/tags/scraping/"},{"name":"Playwright","permalink":"https://antoineboucher.info/CV/blog/tags/playwright/"}],"tags":["LEGO","Data Science","Pandas","Scraping","Playwright"],"tags_text":"LEGO Data Science Pandas Scraping Playwright","thumb":"https://antoineboucher.info/CV/blog/posts/economics-lego-data-science/img-001_hu_b35e60367712e2a7.png","title":"Economics of LEGO Sets with Data Science"},{"content":"Introduction In this report, we present an experiment with technical indicators using the BatchBacktesting project available on GitHub at the following link: BatchBacktesting.
Installing Dependencies To get started, install the necessary libraries:
!pip install numpy httpx rich
Importing Modules Here are the modules to import for the script:
import os
import pandas as pd
import numpy as np
from datetime import datetime
import httpx
import concurrent.futures
import glob
import warnings
from rich.progress import track
warnings.filterwarnings("ignore")
API Configuration Replace the placeholders FMP_API_KEY and BINANCE_API_KEY 
with your actual API keys to access the data from the respective services.
BASE_URL_FMP = "https://financialmodelingprep.com/api/v3"
BASE_URL_BINANCE = "https://fapi.binance.com/fapi/v1/"
FMP_API_KEY = "YOUR_FMP_API_KEY"
BINANCE_API_KEY = "YOUR_BINANCE_API_KEY"
API Request Functions The following functions make API requests to different endpoints and retrieve historical price data for cryptocurrencies and stocks.
def make_api_request(api_endpoint, params):
    with httpx.Client() as client:
        response = client.get(api_endpoint, params=params)
    if response.status_code == 200:
        return response.json()
    print("Error: Failed to retrieve data from API")
    return None
def get_historical_price_full_crypto(symbol):
    api_endpoint = f"{BASE_URL_FMP}/historical-price-full/crypto/{symbol}"
    params = {"apikey": FMP_API_KEY}
    return make_api_request(api_endpoint, params)
def get_historical_price_full_stock(symbol):
    api_endpoint = f"{BASE_URL_FMP}/historical-price-full/{symbol}"
    params = {"apikey": FMP_API_KEY}
    return make_api_request(api_endpoint, params)
def get_SP500():
    api_endpoint = "https://en.wikipedia.org/wiki/List_of_S%26P_500_companies"
    data = pd.read_html(api_endpoint)
    return list(data[0]['Symbol'])
def get_all_crypto():
    return [
        "BTCUSD", "ETHUSD", "LTCUSD", "BCHUSD", "XRPUSD", "EOSUSD",
        "XLMUSD", "TRXUSD", "ETCUSD", "DASHUSD", "ZECUSD", "XTZUSD",
        "XMRUSD", "ADAUSD", "NEOUSD", 
        "XEMUSD", "VETUSD", "DOGEUSD",
        "OMGUSD", "ZRXUSD", "BATUSD", "USDTUSD", "LINKUSD", "BTTUSD",
        "BNBUSD", "ONTUSD", "QTUMUSD", "ALGOUSD", "ZILUSD", "ICXUSD",
        "KNCUSD", "ZENUSD", "THETAUSD", "IOSTUSD", "ATOMUSD", "MKRUSD",
        "COMPUSD", "YFIUSD", "SUSHIUSD", "SNXUSD", "UMAUSD", "BALUSD",
        "AAVEUSD", "UNIUSD", "RENBTCUSD", "RENUSD", "CRVUSD", "SXPUSD",
        "KSMUSD", "OXTUSD", "DGBUSD", "LRCUSD", "WAVESUSD", "NMRUSD",
        "STORJUSD", "KAVAUSD", "RLCUSD", "BANDUSD", "SCUSD", "ENJUSD"
    ]
def get_financial_statements_lists():
    api_endpoint = f"{BASE_URL_FMP}/financial-statement-symbol-lists"
    params = {"apikey": FMP_API_KEY}
    return make_api_request(api_endpoint, params)
Implementing the EMA Strategy The EMA (Exponential Moving Average) is a type of moving average that places a greater weight and significance on the most recent data points. 
The EMA reacts more quickly to recent price changes than the simple moving average (SMA), which assigns equal weight to all observations in the period.
class EMA(Strategy):
    n1 = 20
    n2 = 80

    def init(self):
        close = self.data.Close
        self.ema20 = self.I(taPanda.ema, close.s, self.n1)
        self.ema80 = self.I(taPanda.ema, close.s, self.n2)

    def next(self):
        price = self.data.Close
        if crossover(self.ema20, self.ema80):
            self.position.close()
            self.buy(sl=0.90 * price, tp=1.25 * price)
        elif crossover(self.ema80, self.ema20):
            self.position.close()
            self.sell(sl=1.10 * price, tp=0.75 * price)
In this strategy:
ema20 and ema80 are calculated for a given stock or cryptocurrency. A buy signal is generated when ema20 crosses above ema80. A sell signal is generated when ema80 crosses above ema20. Stop-loss (sl) and take-profit (tp) levels are set to limit potential losses and secure gains. Implementing the MACD Strategy The MACD (Moving Average Convergence Divergence) is a trend-following momentum indicator that shows the relationship between two moving averages of a security’s price. It is calculated by subtracting the 26-period EMA from the 12-period EMA; the result is the MACD line. A nine-period EMA of the MACD, called the “signal line,” is then plotted on top of the MACD line and can act as a trigger for buy and sell signals.
class MACD(Strategy):
    short_period = 12
    long_period = 26
    signal_period = 9

    def init(self):
        close = self.data.Close
        self.macd = self.I(taPanda.macd, close.s, self.short_period, self.long_period, self.signal_period)

    def next(self):
        macd_line = self.macd.macd
        signal_line = self.macd.signal
        if crossover(macd_line, signal_line):
            self.position.close()
            self.buy()
        elif crossover(signal_line, macd_line):
            self.position.close()
            self.sell()
In this strategy:
macd_line and signal_line are calculated using short-term (12-period) and long-term (26-period) EMAs. A buy signal is generated when the macd_line crosses above the signal_line. 
A sell signal is generated when the signal_line crosses above the macd_line. Running Backtests The following functions process instruments and run backtests with the specified strategies.
def run_backtests_strategies(instruments, strategies):
    strategies = [x for x in STRATEGIES if x.__name__ in strategies]
    outputs = []
    with concurrent.futures.ThreadPoolExecutor() as executor:
        futures = []
        for strategy in strategies:
            future = executor.submit(run_backtests, instruments, strategy, 4)
            futures.append(future)
        for future in concurrent.futures.as_completed(futures):
            outputs.extend(future.result())
    return outputs
def check_crypto(instrument):
    return instrument in get_all_crypto()
def check_stock(instrument):
    return instrument not in get_financial_statements_lists()
def process_instrument(instrument, strategy):
    try:
        if check_crypto(instrument):
            data = get_historical_price_full_crypto(instrument)
        else:
            data = get_historical_price_full_stock(instrument)
        if data is None or "historical" not in data:
            print(f"Error processing {instrument}: No data")
            return None
        data = clean_data(data)
        bt = Backtest(data, strategy=strategy, cash=100000, commission=0.002, exclusive_orders=True)
        output = bt.run()
        output = process_output(output, instrument, strategy)
        return output, bt
    except Exception as e:
        print(f"Error processing {instrument}: {str(e)}")
        return None
def clean_data(data):
    data = data["historical"]
    data = pd.DataFrame(data)
    data.columns = [x.title() for x in data.columns]
    data = data.drop(["Adjclose", "Unadjustedvolume", "Change", "Changepercent", "Vwap", "Label", "Changeovertime"], axis=1)
    data["Date"] = pd.to_datetime(data["Date"])
    data.set_index("Date", inplace=True)
    data = data.iloc[::-1]
    return data
def process_output(output, instrument, strategy, in_row=True):
    if in_row:
        output = pd.DataFrame(output).T
    output["Instrument"] = instrument
    output["Strategy"] = strategy.__name__
    output.pop("_strategy")
    return output
def save_output(output, output_dir, instrument, start, end):
    print(f"Saving output for {instrument}")
    fileNameOutput = f"{output_dir}/{instrument}-{start}-{end}.csv"
    output.to_csv(fileNameOutput)
def plot_results(bt, output_dir, instrument, start, end):
    print(f"Saving chart for {instrument}")
    fileNameChart = f"{output_dir}/{instrument}-{start}-{end}.html"
    bt.plot(filename=fileNameChart, open_browser=False)
def run_backtests(instruments, strategy, num_threads=4, generate_plots=False):
    outputs = []
    output_dir = f"output/raw/{strategy.__name__}"
    output_dir_charts = f"output/charts/{strategy.__name__}"
    if not os.path.exists(output_dir):
        os.makedirs(output_dir)
    if not os.path.exists(output_dir_charts):
        os.makedirs(output_dir_charts)
    with concurrent.futures.ThreadPoolExecutor(max_workers=num_threads) as executor:
        future_to_instrument = {executor.submit(process_instrument, instrument, strategy): instrument for instrument in instruments}
        for future in concurrent.futures.as_completed(future_to_instrument):
            instrument = future_to_instrument[future]
            output = future.result()
            if output is not None:
                outputs.append(output[0])
                start = output[0]["Start"].to_string().strip().split()[1]
                end = output[0]["End"].to_string().strip().split()[1]
                save_output(output[0], output_dir, instrument, start, end)
                if generate_plots:
                    plot_results(output[1], output_dir_charts, instrument, start, end)
    data_frame = pd.concat(outputs)
    start = data_frame["Start"].to_string().strip().split()[1]
    end = data_frame["End"].to_string().strip().split()[1]
    fileNameOutput = f"output/{strategy.__name__}-{start}-{end}.csv"
    data_frame.to_csv(fileNameOutput)
    return data_frame
Executing the Scripts To execute the backtests, use the following calls:
tickers = get_SP500()
run_backtests(tickers, strategy=EMA, num_threads=12, generate_plots=True)
run_backtests(tickers, strategy=MACD, num_threads=12, generate_plots=True)
tickers = get_all_crypto()
run_backtests(tickers, strategy=EMA, num_threads=12, generate_plots=True)
run_backtests(tickers, strategy=MACD, num_threads=12, generate_plots=True)
Note that the output directory of the BatchBacktesting repository on GitHub does not contain pre-calculated results; the authors likely chose not to commit test results to avoid cluttering the repository with user-specific data. To obtain calculated values for your own tests, you will need to run the script locally on your machine with your chosen parameters and strategies. 
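Once the per-instrument results are concatenated, ranking by return is a simple sort. A self-contained sketch with placeholder instruments and made-up return values (backtesting.py reports total return under the "Return [%]" key, which this frame mimics):

```python
import pandas as pd

# Toy frame standing in for the concatenated backtest outputs;
# "Return [%]" mirrors the column name produced by backtesting.py's stats.
results = pd.DataFrame({
    "Instrument": ["AAA", "BBB", "CCC", "DDD"],
    "Return [%]": [293.78, -99.93, 12.5, -45.0],
})

ranked = results.sort_values("Return [%]", ascending=False)
top = ranked.head(2)     # best performers
bottom = ranked.tail(2)  # worst performers

print(top["Instrument"].tolist())     # ['AAA', 'CCC']
print(bottom["Instrument"].tolist())  # ['DDD', 'BBB']
```

With the real data, `head(5)` and `tail(5)` on the same sorted frame reproduce the top/bottom-five tables shown next.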
After executing the script, the results will be saved in the output directory of your local project.
Here is an example output link for reference: EMA Chart for AAPL.
Results Analysis Here is an example of the results obtained for the instruments with the highest and lowest returns for EMA:
Top 5 instruments with the highest returns:
BTCBUSD: 293.78%
ALB: 205.97%
OMGUSD: 199.62%
BBWI: 196.82%
GRMN: 193.47%
Top 5 instruments with the lowest returns:
BTTBUSD: -99.93%
UAL: -82.63%
NCLH: -81.51%
LNC: -78.02%
CHRW: -76.38%
Conclusion The BatchBacktesting project offers a flexible and powerful approach for testing and analyzing the performance of technical indicators on stock and cryptocurrency markets. The provided functions allow easy integration with financial services APIs and straightforward data manipulation. The experimental results can be used to develop and refine algorithmic trading strategies based on observed performance.
Originally published on Medium.","date":"2024-05-14","date_unix":1715731200,"id":"https://antoineboucher.info/CV/blog/posts/experimentation-indicateurs-backtesting/","permalink":"https://antoineboucher.info/CV/blog/posts/experimentation-indicateurs-backtesting/","post_kind":"article","section":"posts","summary":"BatchBacktesting walkthrough — EMA and MACD on many tickers, FMP/Binance APIs, and results (English counterpart to the French Medium article).","tag_refs":[{"name":"Python","permalink":"https://antoineboucher.info/CV/blog/tags/python/"},{"name":"Trading","permalink":"https://antoineboucher.info/CV/blog/tags/trading/"},{"name":"Backtesting","permalink":"https://antoineboucher.info/CV/blog/tags/backtesting/"}],"tags":["Python","Trading","Backtesting"],"tags_text":"Python Trading Backtesting","thumb":"https://antoineboucher.info/CV/blog/posts/experimentation-indicateurs-backtesting/img-001_hu_4081a23c315de0b5.png","title":"Experimenting with technical indicators using 
Python and backtesting"},{"content":"Introduction Creating a robust and scalable web infrastructure can be both complex and costly. However, with the right tools and a little creativity, you can build a cost-effective and efficient solution. In this article, we will walk through setting up a Caddy web server on AWS EC2, integrating it with AWS CloudWatch for monitoring, and using AWS Step Functions and Lambda to automate and streamline operations. This guide aims to provide a comprehensive approach to setting up a low-cost dashboard using these technologies.
Step 1: Setting Up Caddy on AWS EC2 Caddy is a powerful, easy-to-use web server that provides automatic HTTPS. It is an excellent choice for managing web traffic and reverse proxying; I also use Caddy for my Home Assistant setup at home.
1. Launch an EC2 Instance:
Log in to the AWS Management Console. Navigate to EC2 and launch a new instance. Choose an Amazon Linux 2 AMI (or any preferred Linux distribution). Select an instance type (e.g., t2.micro for the free tier, or t4g.nano for about $0.10 a day). Configure security group rules to allow HTTP, HTTPS, and SSH access.
2. Install Caddy:
SSH into your EC2 instance and run the following commands to install Caddy:
sudo yum update -y
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://dl.cloudsmith.io/public/caddy/stable/rpm.repo
sudo yum install caddy -y
3. Configure Caddy:
Create a Caddy configuration file (Caddyfile) with your domain and proxy settings. 
Below is an example configuration:
{
    email antoine@antoineboucher.info
    servers {
        metrics
    }
    admin :2019
}
(log_site) {
    log {
        output file /home/ec2-user/caddy/logs/{args[0]}.log {
            roll_size 10mb
            roll_keep 5
            roll_keep_for 168h
        }
        level INFO
    }
}
antoineboucher.info www.antoineboucher.info {
    import log_site antoineboucher.info
    reverse_proxy <cloudfront_url>
    handle_errors {
        redir https://www.github.com/antoinebou12
    }
}
linkedin.antoineboucher.info www.linkedin.antoineboucher.info {
    import log_site linkedin.antoineboucher.info
    redir https://www.linkedin.com/in/antoineboucher12
}
home.antoineboucher.info www.home.antoineboucher.info {
    import log_site home.antoineboucher.info
    reverse_proxy http://homeip:port
}
Reload Caddy to apply the configuration:
sudo caddy reload
Step 2: Monitoring with AWS CloudWatch AWS CloudWatch is a monitoring and management service that provides data and actionable insights for AWS, hybrid, and on-premises applications.
Configure Caddy to Log to CloudWatch: Modify your Caddy configuration to log directly to CloudWatch Logs. You can use the AWS CLI or SDKs to push logs to CloudWatch. 
import os
import boto3
from datetime import datetime

# Initialize the CloudWatch Logs client
cloudwatch = boto3.client('logs', region_name='us-east-1')
# Define your log group name
log_group_name = 'reverse_proxy'
# Path to your log directory
log_directory = "/home/ec2-user/caddy/logs"

def send_log_to_cloudwatch(log_stream_name, log_message):
    try:
        # Get or create the log stream
        streams = cloudwatch.describe_log_streams(logGroupName=log_group_name, logStreamNamePrefix=log_stream_name)
        if not streams['logStreams']:
            cloudwatch.create_log_stream(logGroupName=log_group_name, logStreamName=log_stream_name)
        # Send the log event to CloudWatch
        cloudwatch.put_log_events(
            logGroupName=log_group_name,
            logStreamName=log_stream_name,
            logEvents=[
                {
                    'timestamp': int(datetime.now().timestamp() * 1000),
                    'message': log_message
                }
            ]
        )
    except Exception as e:
        print(f"Failed to send log to CloudWatch: {str(e)}")

# Read logs from files and send them to CloudWatch
for filename in os.listdir(log_directory):
    if filename.endswith(".log"):
        log_stream_name = filename[:-4]  # strip .log to use the filename as the stream name
        file_path = os.path.join(log_directory, filename)
        with open(file_path, 'r') as file:
            for line in file:
                send_log_to_cloudwatch(log_stream_name, line.strip())
You can set up a nightly cron job for this Python script on the EC2 instance:
sudo yum install cronie -y
sudo systemctl start crond
sudo systemctl enable crond
chmod +x /home/ec2-user/cloudwatch.py
crontab -e
0 0 * * * /usr/bin/python3 /home/ec2-user/cloudwatch.py
Create the CloudWatch Log Groups:
aws logs create-log-group --log-group-name reverse_proxy
aws logs create-log-group --log-group-name geoip
Set Up a Lambda Function to Query the Logs:
import boto3
import json
import time
from datetime import datetime, timedelta

def lambda_handler(event, context):
    client = boto3.client('logs')
    query = """
fields @timestamp, @message
| parse @message /"remote_ip": "(?<remote_ip>[^"]+)"/
| stats count() by remote_ip
| sort remote_ip asc
"""
    log_group = 'reverse_proxy'
    start_query_response = client.start_query(
        logGroupName=log_group,
        startTime=int((datetime.now() - timedelta(days=1)).timestamp()),
        endTime=int(datetime.now().timestamp()),
        queryString=query
    )
    query_id = start_query_response['queryId']
    response = None
    max_wait_time = 30  # maximum wait time of 30 seconds
    start_time = time.time()
    while response is None or response['status'] == 'Running':
        if time.time() - start_time > max_wait_time:
            raise TimeoutError("Query did not complete within the maximum wait time.")
        response = client.get_query_results(queryId=query_id)
        time.sleep(0.5)  # short sleep interval to poll frequently
    ip_addresses = []
    for result in response['results']:
        for field in result:
            if field['field'] == 'remote_ip':
                ip_addresses.append(field['value'])
    return {
        'statusCode': 200,
        'body': json.dumps({'ip_addresses': ip_addresses})
    }
Step 3: Automating with AWS Step Functions and Lambda The state machine below chains the two Lambda functions:
{
  "Comment": "Query CloudWatch Logs and Get IP Geolocation",
  "StartAt": "QueryLogsInsights",
  "States": {
    "QueryLogsInsights": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:590183756542:function:QueryLogsInsights",
      "Next": 
\u0026ldquo;GetGeolocation\u0026rdquo;\n},\n\u0026ldquo;GetGeolocation\u0026rdquo;: {\n\u0026ldquo;Type\u0026rdquo;: \u0026ldquo;Task\u0026rdquo;,\n\u0026ldquo;Resource\u0026rdquo;: \u0026ldquo;arn:aws:lambda:us-east-1:590183756542:function:GeolocationIP\u0026rdquo;,\n\u0026ldquo;End\u0026rdquo;: true\n}\n}\n}\nLambda Function for CloudWatch Insights Query:\nimport json\nimport urllib3\nimport boto3\nimport time\ndef lambda_handler(event, context):\n# Extract IP addresses from the event\nip_addresses = json.loads(event[\u0026lsquo;body\u0026rsquo;])[\u0026lsquo;ip_addresses\u0026rsquo;]\nhttp = urllib3.PoolManager() results = \\[\\] for ip in ip\\_addresses: response = http.request('GET', f\u0026quot;https://ipinfo.io/{ip}/json\u0026quot;) data = json.loads(response.data.decode('utf-8')) results.append({ 'IP': ip, 'Location': f\u0026quot;{data.get('city')}, {data.get('region')}, {data.get('country')}\u0026quot;, 'Coordinates': data.get('loc'), 'Organization': data.get('org'), 'Timezone': data.get('timezone') }) \\# Log results to CloudWatch Logs log\\_client = boto3.client('logs') log\\_group\\_name = 'geoip' log\\_stream\\_name = 'geolocation\\_results' \\# Ensure the log group exists try: log\\_client.create\\_log\\_group(logGroupName=log\\_group\\_name) except log\\_client.exceptions.ResourceAlreadyExistsException: pass \\# Ensure the log stream exists try: log\\_client.create\\_log\\_stream(logGroupName=log\\_group\\_name, logStreamName=log\\_stream\\_name) except log\\_client.exceptions.ResourceAlreadyExistsException: pass \\# Put log events for each location log\\_events = \\[\\] for result in results: log\\_events.append({ 'timestamp': int(time.time() \\* 1000), \\# Current time in milliseconds 'message': json.dumps(result) }) \\# Split log events into batches of 10 (AWS limit for PutLogEvents) batch\\_size = 10 for i in range(0, len(log\\_events), batch\\_size): response = log\\_client.put\\_log\\_events( logGroupName=log\\_group\\_name, 
logStreamName=log\\_stream\\_name, logEvents=log\\_events\\[i:i+batch\\_size\\] ) return { 'statusCode': 200, 'body': json.dumps(results) }\rCloudwatch query to unique ip by subdomain\nfields @message\n| parse @message /\u0026ldquo;remote_ip\u0026rdquo;: \u0026ldquo;(?\u0026lt;remote_ip\u0026gt;[^\u0026rdquo;]+)\u0026quot;/\n| stats count_distinct(remote_ip) as unique_ip by remote_ip\n| sort unique_ip desc\nCloudwatch query to fetch the location\nfields @timestamp, @message\n| parse @message /\u0026ldquo;IP\u0026rdquo;: \u0026ldquo;(?[^\u0026rdquo;]+)\u0026quot;, \u0026ldquo;Location\u0026rdquo;: \u0026ldquo;(?\u0026lt;location\u0026gt;[^\u0026rdquo;]+)\u0026quot;/\n| stats count() by ip, location\n| sort count desc\nPress enter or click to view image in full size\nConclusion By integrating Caddy on an AWS EC2 instance with AWS CloudWatch, Step Functions, and Lambda, you can create a robust and scalable web infrastructure with a cost-effective dashboard. This setup not only simplifies the management of your web services but also provides powerful monitoring and automation capabilities, making it easier to maintain and optimize your applications. 
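Before creating the state machine, its wiring can be sanity-checked locally. The sketch below is an illustration, not an AWS API: it loads a states-language definition with plain Python and verifies that the start state exists, every Next target is defined, and at least one state terminates (the ARNs are the example ones used above).

```python
import json

# The two-state definition from the article (example account/function ARNs)
DEFINITION = """
{
  "Comment": "Query CloudWatch Logs and Get IP Geolocation",
  "StartAt": "QueryLogsInsights",
  "States": {
    "QueryLogsInsights": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:590183756542:function:QueryLogsInsights",
      "Next": "GetGeolocation"
    },
    "GetGeolocation": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:590183756542:function:GeolocationIP",
      "End": true
    }
  }
}
"""

def check_definition(definition_text):
    """Check that StartAt names a state, every Next target exists, and some state ends."""
    doc = json.loads(definition_text)
    states = doc["States"]
    assert doc["StartAt"] in states, "StartAt must name a defined state"
    has_terminal = False
    for name, state in states.items():
        nxt = state.get("Next")
        if nxt is not None:
            assert nxt in states, f"{name} points at undefined state {nxt}"
        if state.get("End"):
            has_terminal = True
    assert has_terminal, "at least one state must be terminal"
    return list(states)
```

Running `check_definition(DEFINITION)` returns the state names in definition order; a typo in a `Next` field fails fast here instead of at deploy time.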
With these tools, you can achieve a high level of efficiency and reliability without breaking the bank.\nOriginally published on Medium.","date":"2024-05-14","date_unix":1715724e3,"id":"https://antoineboucher.info/CV/blog/posts/caddy-ec2-cloudwatch-lambda/","permalink":"https://antoineboucher.info/CV/blog/posts/caddy-ec2-cloudwatch-lambda/","post_kind":"article","section":"posts","summary":"Caddy on EC2, logs to CloudWatch, Python shipping scripts, and Step Functions plus Lambda for a low-cost ops dashboard.","tag_refs":[{"name":"AWS","permalink":"https://antoineboucher.info/CV/blog/tags/aws/"},{"name":"Caddy","permalink":"https://antoineboucher.info/CV/blog/tags/caddy/"},{"name":"EC2","permalink":"https://antoineboucher.info/CV/blog/tags/ec2/"},{"name":"CloudWatch","permalink":"https://antoineboucher.info/CV/blog/tags/cloudwatch/"},{"name":"Lambda","permalink":"https://antoineboucher.info/CV/blog/tags/lambda/"},{"name":"Step Functions","permalink":"https://antoineboucher.info/CV/blog/tags/step-functions/"}],"tags":["AWS","Caddy","EC2","CloudWatch","Lambda","Step Functions"],"tags_text":"AWS Caddy EC2 CloudWatch Lambda Step Functions","thumb":"https://antoineboucher.info/CV/blog/posts/caddy-ec2-cloudwatch-lambda/img-001_hu_2ffc3ac1a9ed3ee.png","title":"Making Caddy, AWS EC2, CloudWatch, Step Functions, and Lambda Work Together"},{"content":"I’m thrilled to share that I’ve recently obtained the AWS Certified Cloud Practitioner certification from Amazon Web Services (AWS)! This accomplishment represents a significant milestone in my professional journey, and I want to take this opportunity to highlight some of the incredible tools that made this achievement possible.\nAWS Skill Builder and AWS Cloud Quest were instrumental in my preparation, providing an engaging and comprehensive learning experience. 
In this article, I'll share my study plan and how these AWS tools can help anyone aiming to enhance their cloud computing skills.
My Study Guide: A Two-Week Sprint to Certification
Time Commitment: Approximately 60 hours over two weeks
Here's a breakdown of the resources I used:
1. Ultimate AWS Certified Cloud Practitioner CLF-C02 (Udemy — Paid)
2. AWS Cloud Quest: Cloud Practitioner (AWS Skill Builder — Free)
3. AWS Escape Room: Exam Prep for AWS Certified Cloud Practitioner (CLF-C02) (AWS Skill Builder — Paid Free Trial)
4. Free Practice Exam (https://www.w3schools.com/aws/aws_cloudessentials_awscert.php)
5. Exam Prep Enhanced Course: AWS Certified Cloud Practitioner (CLF-C02 — English) (AWS Skill Builder — Paid Free Trial)
AWS Skill Builder
AWS Skill Builder is an online learning center designed to help users of all skill levels deepen their understanding of AWS services. It offers a wide range of courses, from beginner to advanced levels, covering various aspects of cloud computing.
Features:
- Structured Learning Paths: AWS Skill Builder provides well-organized learning paths, making it easy to follow a structured approach to learning. For the Cloud Practitioner certification, the platform offers a dedicated path that covers all the exam objectives.
- Hands-On Labs: Practical experience is crucial, and Skill Builder includes hands-on labs that allow you to apply what you've learned in real-world scenarios.
- Variety of Content: The platform offers video tutorials, quizzes, and reading materials, catering to different learning styles.
AWS Cloud Quest: Gamifying Cloud Learning
AWS Cloud Quest: Cloud Practitioner is a unique, gamified learning experience. 
It turns cloud learning into an adventure where users complete quests and solve puzzles to learn about AWS services.
- Interactive Learning
- Real-World Scenarios
- Engagement
AWS Escape Room: A Fun and Challenging Prep Tool
The AWS Escape Room is an innovative and engaging way to prepare for the certification exam. It simulates real-life scenarios where you need to solve challenges to "escape" from virtual rooms.
- Exam Simulation: It provides a realistic exam experience, helping you get used to the format and time constraints.
- Critical Thinking: The escape room challenges require critical thinking and problem-solving skills, essential for the exam and real-world applications.
- Interactive: It's a fun way to test your knowledge and keep yourself engaged during the preparation process.
Putting It All Together: My Certification Journey
Combining these resources, I followed a structured study plan:
Week 1: Focused on the Ultimate AWS Certified Cloud Practitioner course on Udemy, completing video lectures and hands-on labs.
Week 2: Utilized AWS Cloud Quest for interactive learning and engaged with the AWS Escape Room for exam simulation. Also supplemented with practice exams and the Exam Prep Enhanced Course on AWS Skill Builder.
My Experience with Pearson VUE and Getting Extra Exam Time
During my certification process, I took my exam through Pearson VUE. Pearson VUE provides a flexible and convenient way to take certification exams, offering both in-person and online proctoring options. My experience with Pearson VUE was seamless, from scheduling the exam to taking it on the day.
One tip I found incredibly helpful was the option to apply for additional exam time. If English is not your primary language, AWS allows you to request a 50% extension for your exam duration. 
Here's how you can do it:
1. AWS Certification Account: Before scheduling your exam, go to exam accommodations.
2. ID Verification: Ensure you have a valid ID for verification purposes. This is a crucial step for both in-person and online exams.
3. Schedule Your Exam with the Extension: Once approved, you can schedule your exam with the additional time included. This extension can make a significant difference, allowing you more time to carefully consider each question and reduce exam stress.
Preparing for the Online Proctored Exam
If you choose to take your exam online, there are a few additional considerations to keep in mind:
- Long Wait Times: Be prepared for potentially long wait times before your exam begins. It's advisable to log in at least 30 minutes before your scheduled time to complete the necessary check-in procedures.
- Room Setup: Ensure your room is clean, free of any food, and without a second monitor. The proctor will ask you to show a 360-degree view of your room using your webcam to ensure there are no unauthorized materials.
- Proctoring Software: Pearson VUE uses specific software to administer online exams. Make sure you have this software installed and tested on your computer before the exam day to avoid any technical issues.
- Quiet Environment: Choose a quiet environment where you won't be disturbed during the exam. Inform family members or roommates about your exam schedule to minimize interruptions.
Getting Your Digital Badge
Once you've passed your exam, you can proudly share your achievement by obtaining a digital badge. AWS uses Credly to issue digital badges. Here's how you can get yours:
Wait for Notification: After passing the exam, wait for 2–3 days to receive an email notification from Credly that your badge is ready. 
Create a Credly Account: If you don't have one already, create a Credly account using the same email address you used for your AWS Certification.
Accept Your Badge: Follow the instructions in the email to accept your badge and add it to your Credly account.
Share Your Badge: You can now share your digital badge on LinkedIn, your resume, or any other platform to showcase your certification.
Conclusion
AWS Skill Builder and AWS Cloud Quest were invaluable resources on my journey to becoming an AWS Certified Cloud Practitioner. These tools provided a comprehensive, engaging, and practical approach to learning AWS, making the certification process not only achievable but enjoyable.
If you're considering pursuing an AWS certification or looking to enhance your cloud skills, I highly recommend leveraging these platforms. They offer a wealth of knowledge, practical experience, and innovative learning methods that cater to various learning styles.
Feel free to reach out if you have any questions about my study process or need advice on your AWS certification journey. 
Happy learning!\nReferences AWS Skill Builder AWS Cloud Quest Ultimate AWS Certified Cloud Practitioner CLF-C02 (Udemy) AWS Escape Room: Exam Prep for AWS Certified Cloud Practitioner Free Practice Exam Exam Prep Enhanced Course: AWS Certified Cloud Practitioner Originally published on Medium.","date":"2024-05-14","date_unix":1715716800,"id":"https://antoineboucher.info/CV/blog/posts/aws-certified-cloud-practitioner/","permalink":"https://antoineboucher.info/CV/blog/posts/aws-certified-cloud-practitioner/","post_kind":"article","section":"posts","summary":"Two-week study plan — Udemy, AWS Skill Builder, Cloud Quest, Escape Room, and Pearson VUE tips (extra time, Credly badge).","tag_refs":[{"name":"AWS","permalink":"https://antoineboucher.info/CV/blog/tags/aws/"},{"name":"Certification","permalink":"https://antoineboucher.info/CV/blog/tags/certification/"},{"name":"Cloud","permalink":"https://antoineboucher.info/CV/blog/tags/cloud/"},{"name":"Learning","permalink":"https://antoineboucher.info/CV/blog/tags/learning/"}],"tags":["AWS","Certification","Cloud","Learning"],"tags_text":"AWS Certification Cloud Learning","thumb":"https://antoineboucher.info/CV/blog/posts/aws-certified-cloud-practitioner/img-001_hu_87cb9f3a16f07b36.png","title":"A Journey to AWS Certified Cloud Practitioner"},{"content":"Introduction In finance, decisions are rarely about a single “forecast” price: they are about ranges, tail risk, and how wrong simple models can be. 
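To make "ranges, not a single forecast" concrete before the full walkthrough, here is a toy sketch with synthetic lognormal outcomes (standard library only; the drift and volatility numbers are illustrative, not market estimates):

```python
import math
import random

random.seed(42)

# Synthetic one-year outcomes: start at 100, lognormal shocks (illustrative parameters)
start_price = 100.0
outcomes = sorted(start_price * math.exp(random.gauss(0.05, 0.20)) for _ in range(10_000))

def percentile(sorted_xs, p):
    # Nearest-rank percentile — good enough for a toy illustration
    idx = min(len(sorted_xs) - 1, max(0, round(p / 100 * len(sorted_xs)) - 1))
    return sorted_xs[idx]

# The decision-relevant objects are quantiles of the distribution, not the mean alone
q5, q50, q95 = (percentile(outcomes, p) for p in (5, 50, 95))
```

The 5th and 95th percentiles bracket where most of the simulated mass sits; a risk question ("how bad is a bad year?") reads off `q5`, which no single point forecast can answer.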
This article walks through a Monte Carlo path simulation in Python: we estimate drift and volatility from historical closes, simulate many future price paths (a geometric Brownian–style discrete step), and summarize the result as a distribution—the right object for risk-style questions (bands, percentiles, coverage against a hold-out period).\nMarkov chain Monte Carlo (MCMC), as in Landauskas and Valakevičius’s paper on stock price modelling, is a different tool: it draws samples from a distribution that need not be a simple Gaussian—for example one built from a kernel density estimate of observed prices—whereas the code below assumes lognormal shocks from estimated drift and volatility. A practical workflow is MCMC (or other inference) for the law of the data, then forward Monte Carlo for multi-step scenarios. This post implements the forward GBM-style step explicitly; see the references and the linked work-in-progress below if you want to push toward paper-style MCMC.\nStep 1: Setting Up the Environment To start, we need to install the necessary Python libraries. These libraries include pandas, numpy, httpx, backtesting, pandas_ta, matplotlib, scipy, rich, and others. 
Here's how to install and import them:

pip install pandas numpy httpx backtesting pandas_ta matplotlib scipy rich

import pandas as pd
import numpy as np
from datetime import datetime
import concurrent.futures
import warnings
from rich.progress import track
from backtesting import Backtest, Strategy
import pandas_ta as ta
import matplotlib.pyplot as plt
from scipy.stats import norm
import httpx
warnings.filterwarnings("ignore")

Step 2: Defining Utility Functions
We need functions to fetch historical stock prices and crypto prices from APIs. The functions below assume two constants: BASE_URL_FMP (the Financial Modeling Prep REST endpoint) and FMP_API_KEY (your API key, best read from the environment rather than hard-coded).

def make_api_request(api_endpoint, params):
    with httpx.Client() as client:
        # Make the GET request to the API
        response = client.get(api_endpoint, params=params)
    if response.status_code == 200:
        return response.json()
    print("Error: Failed to retrieve data from API")
    return None

def get_historical_price_full_crypto(symbol):
    api_endpoint = f"{BASE_URL_FMP}/historical-price-full/crypto/{symbol}"
    params = {"apikey": FMP_API_KEY}
    return make_api_request(api_endpoint, params)

def get_historical_price_full_stock(symbol):
    api_endpoint = f"{BASE_URL_FMP}/historical-price-full/{symbol}"
    params = {"apikey": FMP_API_KEY}
    return make_api_request(api_endpoint, params)

def get_SP500():
    url = "https://en.wikipedia.org/wiki/List_of_S%26P_500_companies"
    data = pd.read_html(url)
    return list(data[0]['Symbol'])

def get_all_crypto():
    return [
        "BTCUSD", "ETHUSD", "LTCUSD", "BCHUSD", "XRPUSD", "EOSUSD",
        "XLMUSD", "TRXUSD", "ETCUSD", "DASHUSD", "ZECUSD", "XTZUSD",
        "XMRUSD", "ADAUSD", "NEOUSD", "XEMUSD", "VETUSD", "DOGEUSD",
        "OMGUSD", "ZRXUSD", "BATUSD", "USDTUSD", "LINKUSD", "BTTUSD",
        "BNBUSD", "ONTUSD", "QTUMUSD", "ALGOUSD", "ZILUSD", "ICXUSD",
        "KNCUSD", "ZENUSD", "THETAUSD", "IOSTUSD", "ATOMUSD", "MKRUSD",
        "COMPUSD", "YFIUSD", "SUSHIUSD", "SNXUSD", "UMAUSD", "BALUSD",
        "AAVEUSD", "UNIUSD", "RENBTCUSD", "RENUSD", "CRVUSD", "SXPUSD",
        "KSMUSD", "OXTUSD", "DGBUSD", "LRCUSD", "WAVESUSD", "NMRUSD",
        "STORJUSD", "KAVAUSD", "RLCUSD", "BANDUSD", "SCUSD", "ENJUSD",
    ]

def get_financial_statements_lists():
    api_endpoint = f"{BASE_URL_FMP}/financial-statement-symbol-lists"
    params = {"apikey": FMP_API_KEY}
    return make_api_request(api_endpoint, params)

Step 3: Splitting Data into Training and Testing Sets
Next, we'll fetch the historical stock prices for a given symbol and split the data into training and testing sets. 
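The split that follows is by date, not at random: with a time series, shuffling would leak future information into the training window. The same idea on a toy series (standard library only; the dates and prices are made up for illustration):

```python
from datetime import date

# Toy series of (date, close) pairs; the real data comes from the FMP response
series = [
    (date(2022, 12, 28), 126.0),
    (date(2022, 12, 29), 129.6),
    (date(2022, 12, 30), 129.9),
    (date(2023, 1, 3), 125.1),
    (date(2023, 1, 4), 126.4),
]

cutoff = date(2023, 1, 1)

# Chronological split: everything before the cutoff estimates the model,
# everything from the cutoff onward is held out for coverage checks.
train = [(d, c) for d, c in series if d < cutoff]
holdout = [(d, c) for d, c in series if d >= cutoff]
```

Every training date precedes every hold-out date, which is the property a random split would destroy.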
We keep two frames: before January 2023 (used to estimate the model and run simulations) and from January 2023 onward (hold-out for comparing simulated ranges to realized prices).

stock_symbol = "AAPL"
stock_prices = get_historical_price_full_stock(stock_symbol)
data = pd.DataFrame(stock_prices['historical'])

def prepare_price_frame(df):
    df = df.rename(columns={
        'open': 'Open',
        'high': 'High',
        'low': 'Low',
        'close': 'Close',
        'volume': 'Volume',
    })
    required_columns = ['date', 'Open', 'High', 'Low', 'Close', 'Volume']
    return df[required_columns].sort_values(by=['date'], ascending=True).reset_index(drop=True)

prices_before_january_2023 = prepare_price_frame(data[data['date'] < '2023-01-01'])
prices_after_january_2023 = prepare_price_frame(data[data['date'] >= '2023-01-01'])

plt.figure(figsize=(10, 6))
plt.title('Stock Prices')
plt.xlabel('Date')
plt.ylabel('Price')
plt.plot(prices_before_january_2023['date'], prices_before_january_2023['Close'], label='Train (before Jan 2023)')
plt.plot(prices_after_january_2023['date'], prices_after_january_2023['Close'], label='Hold-out (from Jan 2023)')
plt.legend()
plt.show()

Step 4: Monte Carlo Simulation (Forward Paths and Risk Bands)
The function below is a Monte Carlo simulation of a constant-parameter model: we estimate the mean and variance of log returns on the training window, build a daily drift and volatility, then draw many independent Gaussian shocks and propagate the price forward. That is not MCMC; there is no Markov chain sampling a posterior here. It is the kind of forward scenario engine you might run after inference. By contrast, Landauskas and Valakevičius (Intellectual Economics, 2011) use MCMC to sample from a distribution shaped by a kernel density estimate of prices (piecewise-linear proposals)—a way to stay close to the empirical law of the data with few parametric assumptions. Our GBM shortcut is simpler; the paper is the reference when you want the data-driven sampling step.
For a work-in-progress that extends this line of thought (batch experiments, richer risk views, and moving closer to paper-style MCMC), see this LinkedIn experiment (WIP).
The useful outputs for risk are distributions: percentiles of terminal price, prediction-style bands (for example 5th–95th percentile paths), and coverage checks against a hold-out (did realized prices sit where the simulated mass was?).

def monte_carlo_simulation(data, days, iterations):
    if isinstance(data, pd.Series):
        data = data.to_numpy()
    if not isinstance(data, np.ndarray):
        raise TypeError("Data must be a numpy array or pandas Series")
    log_returns = np.log(data[1:] / data[:-1])
    mean = np.mean(log_returns)
    variance = np.var(log_returns)
    drift = mean - (0.5 * variance)
    daily_volatility = np.std(log_returns)
    future_prices = np.zeros((days, iterations))
    current_price = data[-1]
    for t in range(days):
        shocks = drift + daily_volatility * norm.ppf(np.random.rand(iterations))
        future_prices[t] = current_price * np.exp(shocks)
        current_price = future_prices[t]
    return future_prices
Visualisation

simulation_days = 364
mc_iterations = 1000
mc_prices = monte_carlo_simulation(prices_before_january_2023['Close'], simulation_days, mc_iterations)
last_train_close = prices_before_january_2023['Close'].iloc[-1]
last_close_price = np.full((1, mc_iterations), last_train_close)
mc_prices_combined = np.concatenate((last_close_price, mc_prices), axis=0)
last_date = prices_before_january_2023['date'].iloc[-1]
simulated_dates = pd.date_range(start=last_date, periods=simulation_days + 1)

# Percentiles across paths at each future step (risk band)
p05 = np.percentile(mc_prices_combined, 5, axis=1)
p50 = np.percentile(mc_prices_combined, 50, axis=1)
p95 = np.percentile(mc_prices_combined, 95, axis=1)
mean_path = mc_prices_combined.mean(axis=1)

# Terminal distribution at the last simulated step (VaR-style summaries)
terminal_prices = mc_prices_combined[simulation_days, :]
mean_terminal_price = float(np.mean(terminal_prices))
q5, q50, q95 = np.percentile(terminal_prices, [5, 50, 95])
terminal_return = terminal_prices / last_train_close - 1.0
ret_q5, ret_q50, ret_q95 = np.percentile(terminal_return, [5, 50, 95])
horizon_idx = min(simulation_days, len(prices_after_january_2023) - 1)
real_price = float(prices_after_january_2023['Close'].iloc[horizon_idx])
real_date = prices_after_january_2023['date'].iloc[horizon_idx]
in_90_band = q5 <= real_price <= q95
print(f"Simulated horizon: {simulation_days} trading days after {last_date}")
print(f"Mean terminal price: ${mean_terminal_price:.2f}")
print(f"Terminal price percentiles (5 / 50 / 95): ${q5:.2f} / ${q50:.2f} / ${q95:.2f}")
print(f"Terminal simple return vs last train close — 5th / 50th / 95th %ile: {ret_q5*100:.2f}% / {ret_q50*100:.2f}% / {ret_q95*100:.2f}%")
print(f"Hold-out price at aligned step ({real_date}): ${real_price:.2f}")
print(f"Realized price inside simulated 5–95% band: {in_90_band}")

plt.figure(figsize=(10, 6))
for i in range(mc_iterations):
    plt.plot(simulated_dates, mc_prices_combined[:, i], linewidth=0.5, color='gray', alpha=0.02)
plt.fill_between(simulated_dates, p05, p95, alpha=0.25, label='5th–95th percentile band')
plt.plot(simulated_dates, p50, label='Median path', linewidth=2, color='C0')
plt.plot(simulated_dates, mean_path, label='Mean path', linewidth=2, linestyle='--', color='C1')
plt.plot(pd.to_datetime(prices_before_january_2023['date']), prices_before_january_2023['Close'], label='Train (before Jan 2023)', linewidth=2, color='black')
plt.plot(pd.to_datetime(prices_after_january_2023['date']), prices_after_january_2023['Close'], label='Hold-out (from Jan 2023)', linewidth=2, color='green')
plt.axvline(pd.to_datetime(real_date), color='red', linestyle=':', linewidth=1, alpha=0.8, label='Hold-out step aligned to horizon')
plt.scatter([pd.to_datetime(real_date)], [real_price], color='red', s=40, zorder=5, label='Realized (aligned)')
plt.title('Monte Carlo Simulation of Stock Prices (with percentile band)')
plt.xlabel('Date')
plt.ylabel('Price')
plt.legend(loc='upper left', fontsize=8)
plt.show()
Conclusion
Forward Monte Carlo gives you a distribution of future prices under an assumed dynamics—ideal for percentile bands, tail behaviour, and coverage checks against data you held out. That is a different step from MCMC, which is about sampling from a flexible, data-driven distribution such as one built from a kernel density estimate of observed prices.","date":"2024-05-14","date_unix":1715691600,"id":"https://antoineboucher.info/CV/blog/posts/predicting-stock-prices-monte-carlo/","permalink":"https://antoineboucher.info/CV/blog/posts/predicting-stock-prices-monte-carlo/","post_kind":"article","section":"posts","summary":"Monte Carlo path simulation in Python from historical returns—risk bands, quantiles, and hold-out coverage checks.","tag_refs":[{"name":"Python","permalink":"https://antoineboucher.info/CV/blog/tags/python/"},{"name":"Finance","permalink":"https://antoineboucher.info/CV/blog/tags/finance/"},{"name":"Monte Carlo","permalink":"https://antoineboucher.info/CV/blog/tags/monte-carlo/"},{"name":"Backtesting","permalink":"https://antoineboucher.info/CV/blog/tags/backtesting/"}],"tags":["Python","Finance","Monte Carlo","Backtesting"],"tags_text":"Python Finance Monte Carlo Backtesting","thumb":"https://antoineboucher.info/CV/blog/posts/predicting-stock-prices-monte-carlo/img-001_hu_66a57fcbf469d696.png","title":"Predicting Stock Prices with Monte Carlo Simulations"},{"content":"Introduction This tutorial will guide you through setting up a Kinectron sketch in p5.js, which includes functionality for stopping and playing the sketch, as well as saving it as a GIF.
Prerequisites
- Basic knowledge of JavaScript and p5.js.
- Kinectron library installed.
- p5.js library installed.
- Have a Kinect v2 or Azure Kinect DK.
- Have a Kinectron server running.
- Have a local or online environment that supports JavaScript and p5.js. 
Step 1: Set Up Your Environment
Ensure you have the p5.js and Kinectron libraries included in your HTML file (load either the minified or the full p5.js build, not both):

<html>
  <head>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.9.0/p5.min.js" type="text/javascript"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.9.0/addons/p5.sound.min.js"></script>
    <script src="./client/dist/kinectron-client.js" type="text/javascript"></script>
    <script src="sketch.js" type="text/javascript"></script>
  </head>
  <body></body>
</html>

Step 2: Initialize Variables
In your sketch.js file, start by defining the necessary variables.

// Kinectron setup function
function Kinectron() {
  // Define Kinectron joint types
}
let kinectron = new Kinectron();
const liveData = false;
let stopButton, playButton, saveGifButton;
let recorded_skeleton;

function preload() {
  // Load recorded data if liveData is false
}

Step 3: Setup the Canvas and Buttons
Create the canvas and buttons for controlling the sketch.

function setup() {
  createCanvas(640, 480);
  background(0);
  if (liveData) {
    // Initialize Kinectron for live data
  } else {
    // Setup for pre-recorded data and buttons
  }
}

Step 4: Draw Function
Implement the draw function to handle live and recorded data.

function draw() {
  // Handle live or recorded data
}

Step 5: Implement Gesture Checking
Create functions to check for specific gestures in the Kinectron data.

function checkForGestures(body) {
  // Check and display each gesture
}
function isClapping(body) { /* ... */ }
function isOKSign(body) { /* ... */ }
// Implement other gesture functions

Step 6: Create Button Functions
Define functions for stop, play, and save GIF buttons.

function stopSketch() {
  noLoop();
  console.log("Sketch stopped.");
}
function playSketch() {
  loop();
  console.log("Sketch playing.");
}
function startCreatingGif() {
  saveGif('mySketch', 5);
  // Additional code for downloading the GIF
}

Step 7: Run Your Sketch
Run your sketch in a local or online environment that supports JavaScript and p5.js.

This tutorial guided you through setting up a Kinectron sketch in p5.js with additional features for controlling the sketch playback and saving it as a GIF. Experiment with different gestures and functionalities to enhance your sketch further.","date":"2024-03-15","date_unix":1710511200,"id":"https://antoineboucher.info/CV/blog/posts/kinectron-p5-sketch-gif/","permalink":"https://antoineboucher.info/CV/blog/posts/kinectron-p5-sketch-gif/","post_kind":"article","section":"posts","summary":"Wire up Kinectron with p5.js, play/stop the sketch, and save output as a GIF (Kinect v2 or Azure Kinect DK).","tag_refs":[{"name":"Kinect","permalink":"https://antoineboucher.info/CV/blog/tags/kinect/"},{"name":"Kinectron","permalink":"https://antoineboucher.info/CV/blog/tags/kinectron/"},{"name":"P5.js","permalink":"https://antoineboucher.info/CV/blog/tags/p5.js/"},{"name":"JavaScript","permalink":"https://antoineboucher.info/CV/blog/tags/javascript/"},{"name":"Creative Coding","permalink":"https://antoineboucher.info/CV/blog/tags/creative-coding/"},{"name":"Tutorial","permalink":"https://antoineboucher.info/CV/blog/tags/tutorial/"}],"tags":["Kinect","Kinectron","p5.js","JavaScript","Creative Coding","Tutorial"],"tags_text":"Kinect Kinectron p5.js JavaScript Creative Coding 
Tutorial","thumb":"https://antoineboucher.info/CV/blog/posts/kinectron-p5-sketch-gif/mySketch.gif","title":"Kinectron + p5.js — sketch controls and GIF export"},{"content":"Byzantium ran its first Ethereum workshop: a hands-on session where attendees went from a Solidity / ERC-20 starter (via OpenZeppelin) to deploying a token and swapping transfers with each other. Khalil Anis Zabat led the session.\nFull article in French (same slug — you can also switch to FR in the header).\nLinks: LinkedIn thread / Byzantium recap · Deployed contract (short link)\nThanks to the facilitator and everyone who joined.","date":"2024-03-11","date_unix":1710196200,"id":"https://antoineboucher.info/CV/blog/posts/byzantium-solidity-ethereum-workshop/","permalink":"https://antoineboucher.info/CV/blog/posts/byzantium-solidity-ethereum-workshop/","post_kind":"conference","section":"posts","summary":"Short English recap of Byzantium’s first Ethereum workshop — OpenZeppelin ERC-20, deploy, and peer-to-peer token transfers.","tag_refs":[{"name":"Byzantium","permalink":"https://antoineboucher.info/CV/blog/tags/byzantium/"},{"name":"Solidity","permalink":"https://antoineboucher.info/CV/blog/tags/solidity/"},{"name":"Ethereum","permalink":"https://antoineboucher.info/CV/blog/tags/ethereum/"},{"name":"Conference","permalink":"https://antoineboucher.info/CV/blog/tags/conference/"},{"name":"Education","permalink":"https://antoineboucher.info/CV/blog/tags/education/"}],"tags":["Byzantium","Solidity","Ethereum","Conference","Education"],"tags_text":"Byzantium Solidity Ethereum Conference Education","thumb":"https://antoineboucher.info/CV/blog/posts/byzantium-solidity-ethereum-workshop/images/audience-wide_hu_ff5f62a1d65915b.png","title":"Byzantium’s first workshop — Solidity and an ERC-20 token on Ethereum"},{"content":"This code creates a responsive progress bar with four steps. It uses jQuery for dynamic text changes and CSS for styling. 
Here\u0026rsquo;s a breakdown of its functionality:\nHTML Structure The progress bar is wrapped inside a div with the class progressbar_container. An unordered list (ul) with the class progressbar represents the progress bar. Each step in the progress bar is an li element with the class progressbar_node. The current step is highlighted by adding the class current_node. CSS Styling The .progressbar_container is styled to position the progress bar, manage its size, and center it. Each .progressbar_node represents a step in the progress bar. The :before pseudo-element of .progressbar_node creates circular step indicators with numbers. The :after pseudo-element creates connecting lines between the steps. The current and completed steps are highlighted with a darker color and a solid border. JavaScript Functionality On document ready ($(document).ready), the steps preceding the current step are marked as completed using the class activated_node. A resize event listener ($(window).resize) changes the text of the first step based on the window\u0026rsquo;s width. It toggles between \u0026ldquo;PASSENGER\u0026rdquo; and \u0026ldquo;PASSENGER DETAILS\u0026rdquo;. Notes Ensure you have included the jQuery library to use the jQuery syntax. The resizing functionality helps maintain responsiveness, providing a better experience on different screen sizes. Example Usage To use this progress bar, include the provided HTML in your document. Ensure your CSS is properly linked, and the jQuery library is included for the JavaScript to work correctly.\nEnhancements You can modify the number of steps by adjusting the HTML and potentially tweaking the CSS. Consider adding ARIA attributes for accessibility, making the progress bar usable for screen readers. You could enhance the responsiveness further by using CSS media queries instead of JavaScript for text changes.
This progress bar is a great way to visually represent progress through a multi-step process, such as a checkout or registration flow.","date":"2024-02-12","date_unix":1707746400,"id":"https://antoineboucher.info/CV/blog/posts/tutorial-jquery-step-progress-bar/","permalink":"https://antoineboucher.info/CV/blog/posts/tutorial-jquery-step-progress-bar/","post_kind":"tutorial","section":"posts","summary":"Multi-step indicator with numbered nodes, connectors, and responsive label text.","tag_refs":[{"name":"JQuery","permalink":"https://antoineboucher.info/CV/blog/tags/jquery/"},{"name":"CSS","permalink":"https://antoineboucher.info/CV/blog/tags/css/"},{"name":"Progress","permalink":"https://antoineboucher.info/CV/blog/tags/progress/"},{"name":"Tutorial","permalink":"https://antoineboucher.info/CV/blog/tags/tutorial/"},{"name":"Frontend","permalink":"https://antoineboucher.info/CV/blog/tags/frontend/"}],"tags":["jQuery","CSS","Progress","Tutorial","Frontend"],"tags_text":"jQuery CSS Progress Tutorial Frontend","thumb":"/CV/blog/images/post-kind-tutorial.png","title":"Responsive step progress bar (jQuery + CSS)"},{"content":"Tutorial: Building a Screen Capture Utility with HTML, CSS, and JavaScript Introduction This tutorial demonstrates how to create a screen capture utility in a web application. 
We will use HTML for the structure, CSS for styling, and JavaScript for functionality.\nPrerequisites Basic understanding of HTML, CSS, and JavaScript A modern web browser with support for getDisplayMedia HTML Setup First, we create the HTML structure with buttons for starting and stopping the screen capture and a section to display the video.\n\u0026lt;p\u0026gt; \u0026lt;button id=\u0026#34;start\u0026#34;\u0026gt;Start Capture\u0026lt;/button\u0026gt; \u0026lt;button id=\u0026#34;stop\u0026#34; class=\u0026#34;hidden\u0026#34;\u0026gt;Stop Capture\u0026lt;/button\u0026gt; \u0026lt;/p\u0026gt; \u0026lt;div class=\u0026#34;wrapper-video\u0026#34;\u0026gt;\u0026lt;/div\u0026gt; \u0026lt;br\u0026gt; \u0026lt;strong class=\u0026#34;log-title\u0026#34;\u0026gt;Log:\u0026lt;/strong\u0026gt; \u0026lt;br\u0026gt; \u0026lt;pre id=\u0026#34;log\u0026#34;\u0026gt;\u0026lt;/pre\u0026gt; CSS Styling Next, style the elements for a better user interface.\n#video { display: table-cell; border: 1px solid #999; width: 100%; max-width: 1080px; } .wrapper-video { display: table; width: 100%; max-width: 1082px; } .recording-border { border: 1px solid red; } .error-background-color { background-color: red; } .error { color: red; } .warn { color: orange; } .info { color: darkgreen; } .hidden { display: none; } .log-title { margin-top: 8px; } JavaScript Functionality Implement the JavaScript to handle screen capture and logging.\nconst $logElem = $(\u0026#34;#log\u0026#34;); const $startElem = $(\u0026#34;#start\u0026#34;); const $stopElem = $(\u0026#34;#stop\u0026#34;); var displayMediaOptions = { video: { cursor: \u0026#39;never\u0026#39;, displaySurface: \u0026#39;browser\u0026#39; }, audio: false }; $startElem.on(\u0026#39;click\u0026#39;, function(evt) { startCapture(); }); $stopElem.on(\u0026#39;click\u0026#39;, function(evt) { stopCapture(); }); console.log = msg =\u0026gt; $logElem.append(`${msg}\u0026lt;br\u0026gt;`); console.error = msg =\u0026gt; 
$logElem.append(`\u0026lt;span class=\u0026#34;error\u0026#34;\u0026gt;${msg}\u0026lt;/span\u0026gt;\u0026lt;br\u0026gt;`); console.warn = msg =\u0026gt; $logElem.append(`\u0026lt;span class=\u0026#34;warn\u0026#34;\u0026gt;${msg}\u0026lt;/span\u0026gt;\u0026lt;br\u0026gt;`); console.info = msg =\u0026gt; $logElem.append(`\u0026lt;span class=\u0026#34;info\u0026#34;\u0026gt;${msg}\u0026lt;/span\u0026gt;\u0026lt;br\u0026gt;`); async function startCapture() { $logElem.text(\u0026#39;\u0026#39;); try { $(\u0026#39;.wrapper-video\u0026#39;).addClass(\u0026#34;recording-border\u0026#34;).append(\u0026#39;\u0026lt;video id=\u0026#34;video\u0026#34; autoplay\u0026gt;\u0026lt;/video\u0026gt;\u0026#39;); $stopElem.removeClass(\u0026#39;hidden\u0026#39;); $startElem.addClass(\u0026#39;hidden\u0026#39;); $(\u0026#39;#video\u0026#39;).removeClass(\u0026#39;error-background-color\u0026#39;); document.getElementById(\u0026#34;video\u0026#34;).srcObject = await navigator.mediaDevices.getDisplayMedia(displayMediaOptions); dumpOptionsInfo(); } catch(err) { $(\u0026#39;#video\u0026#39;).addClass(\u0026#39;error-background-color\u0026#39;); // Handle different types of errors here } } function stopCapture(evt) { let tracks = document.getElementById(\u0026#39;video\u0026#39;).srcObject.getTracks(); tracks.forEach(track =\u0026gt; track.stop()); document.getElementById(\u0026#34;video\u0026#34;).srcObject = null; $stopElem.addClass(\u0026#34;hidden\u0026#34;); $startElem.removeClass(\u0026#34;hidden\u0026#34;); $(\u0026#34;.wrapper-video\u0026#34;).removeClass(\u0026#34;recording-border\u0026#34;).text(\u0026#34;\u0026#34;); $logElem.text(\u0026#34;\u0026#34;); } function dumpOptionsInfo() { const videoTrack = document.getElementById(\u0026#34;video\u0026#34;).srcObject.getVideoTracks()[0]; console.info(\u0026#34;Track settings:\u0026#34;); console.info(JSON.stringify(videoTrack.getSettings(), null, 2)); console.info(\u0026#34;Track constraints:\u0026#34;);
console.info(JSON.stringify(videoTrack.getConstraints(), null, 2)); } Error Handling Add appropriate error handling in the catch block of the startCapture function for different types of errors.\nConclusion With this setup, you can start and stop screen capture in your web application. The log section will display information about the screen capture and any errors encountered. This utility can be useful in various applications like tutorials, presentations, or remote assistance tools.","date":"2024-02-10","date_unix":1707573600,"id":"https://antoineboucher.info/CV/blog/posts/tutorial-webrtc-screen-capture/","permalink":"https://antoineboucher.info/CV/blog/posts/tutorial-webrtc-screen-capture/","post_kind":"tutorial","section":"posts","summary":"Start/stop screen sharing with the Screen Capture API, a video element, and simple logging.","tag_refs":[{"name":"WebRTC","permalink":"https://antoineboucher.info/CV/blog/tags/webrtc/"},{"name":"JavaScript","permalink":"https://antoineboucher.info/CV/blog/tags/javascript/"},{"name":"HTML","permalink":"https://antoineboucher.info/CV/blog/tags/html/"},{"name":"Tutorial","permalink":"https://antoineboucher.info/CV/blog/tags/tutorial/"},{"name":"Frontend","permalink":"https://antoineboucher.info/CV/blog/tags/frontend/"}],"tags":["WebRTC","JavaScript","HTML","Tutorial","Frontend"],"tags_text":"WebRTC JavaScript HTML Tutorial Frontend","thumb":"/CV/blog/images/post-kind-tutorial.png","title":"Screen capture in the browser (getDisplayMedia)"},{"content":"Small experiments and UI demos published on CodePen:\nPen — BMdzwx Details — xMXNyy Canvas blackboard — MLEdxr Pen — JxrqQx Pen — byVQKJ Pen — jjzxER Pen — qzQpYg Pen — ZEENwWB Three.js wave — rNoqVOj Longer write-ups for some of these live under Posts (blackboard and Three.js 
tutorials).","date":"2024-01-10","date_unix":1704895200,"id":"https://antoineboucher.info/CV/blog/posts/codepen-demos-antoinebou13/","permalink":"https://antoineboucher.info/CV/blog/posts/codepen-demos-antoinebou13/","post_kind":"article","section":"posts","summary":"Quick links to interactive pens on CodePen — canvas, Three.js, UI widgets, and more.","tag_refs":[{"name":"CodePen","permalink":"https://antoineboucher.info/CV/blog/tags/codepen/"},{"name":"Frontend","permalink":"https://antoineboucher.info/CV/blog/tags/frontend/"},{"name":"JavaScript","permalink":"https://antoineboucher.info/CV/blog/tags/javascript/"},{"name":"CSS","permalink":"https://antoineboucher.info/CV/blog/tags/css/"}],"tags":["CodePen","Frontend","JavaScript","CSS"],"tags_text":"CodePen Frontend JavaScript CSS","thumb":"/CV/blog/images/post-kind-article.png","title":"CodePen demos (collection)"},{"content":"Introduction Welcome to my personal blog, a chronicle of my journey in developing a multifaceted portfolio using Hugo. As a software engineer, I am excited to share the nuances of building a dynamic and interactive website, where my professional skills intersect with personal passions. This inaugural post marks the beginning of a series in which I\u0026rsquo;ll delve into various aspects of web development, data analysis, and the integration of advanced web technologies.\nWhat to Expect Visual Storytelling This blog will feature screenshots of the Hugo interface, illustrative graphics, and maps to enhance the storytelling aspect.\nHelpful Resources I\u0026rsquo;ll share resources such as Hugo templates, links to my Substack, and tech tools like particles.js, providing a valuable resource trove for budding software engineers and web developers.\nChoosing Hugo and the HBTheme When deciding to create my portfolio, I was drawn to Hugo for its reputation for speed and flexibility. As a software engineer, efficiency and scalability are always at the forefront of my decision-making. 
After exploring several options, I chose the HBTheme for its comprehensive feature set. The theme\u0026rsquo;s support for comments, its seamless integration with npm (Node Package Manager), and the availability of a variety of Hugo modules made it an obvious choice. These features not only enhanced my blog\u0026rsquo;s functionality but also aligned perfectly with my professional workflow, enabling me to implement advanced web technologies with ease.\nMerging Data Analysis with Web Development One of my initial projects involved a detailed analysis of car thefts in Montreal. The project was not just an exercise in data analysis, but also a personal endeavor, inspired by stories from friends and family. Using Hugo\u0026rsquo;s capabilities, I was able to embed and showcase these complex datasets in an accessible and engaging manner. This integration exemplified Hugo\u0026rsquo;s capacity to handle data-intensive content, a crucial aspect for any software engineer looking to present technical work in a clear and compelling manner. You can view the study here: Étude des vols de voitures à Montréal.\nThe Versatility of Substack Alongside my Hugo blog, I\u0026rsquo;ve ventured into Substack. This platform offers a distinct ecosystem conducive to in-depth writing and engaged readership. My Substack page serves as a complementary space where I delve deeper into topics that require more expansive coverage. It allows me to reach a broader audience and provides a different format for interaction and discussion.\nMerging Professional Development and Blogging This blog also serves as a platform for discussing academic and professional development. I plan to share insights on pursuing advanced degrees and balancing them with career objectives. 
As someone working to complete his master\u0026rsquo;s degree, I\u0026rsquo;ll explore how academic knowledge can be applied in practical software development and the broader tech industry.\nCrafting a Unique Online Presence In creating my blog, I paid special attention to aesthetics and functionality. By experimenting with various Hugo themes and incorporating interactive elements like particles.js, I aimed to create a visually appealing and user-friendly interface. These design choices reflect my belief in the importance of a clean and efficient user experience, a philosophy I carry over from my software engineering background.\nConclusion This blog is an embodiment of my journey in the tech world, blending personal experiences with professional growth. Through this platform, I aim to share my insights into software engineering, web development, and much more. I invite you to join me on this exploration, to learn, to be inspired, and to discover the endless possibilities in the world of technology.","date":"2024-01-06","date_unix":1704549600,"id":"https://antoineboucher.info/CV/blog/posts/portfolio-hugo-week-1/","permalink":"https://antoineboucher.info/CV/blog/posts/portfolio-hugo-week-1/","post_kind":"article","section":"posts","summary":"Starting a Hugo-based portfolio — themes, data-heavy pages, and tying in Substack.","tag_refs":[{"name":"Hugo","permalink":"https://antoineboucher.info/CV/blog/tags/hugo/"},{"name":"Portfolio","permalink":"https://antoineboucher.info/CV/blog/tags/portfolio/"},{"name":"Static Site","permalink":"https://antoineboucher.info/CV/blog/tags/static-site/"},{"name":"Substack","permalink":"https://antoineboucher.info/CV/blog/tags/substack/"}],"tags":["Hugo","Portfolio","Static Site","Substack"],"tags_text":"Hugo Portfolio Static Site Substack","thumb":"https://antoineboucher.info/CV/blog/posts/portfolio-hugo-week-1/featured_hu_493ef8870764b396.png","title":"Create a portfolio with Hugo (week 1)"},{"content":"Introduction We used the 
Rhino app on an iPhone with LiDAR to scan our apartment and make clearer decisions about layout and furniture.\nLiDAR and Rhino LiDAR captures depth quickly; Rhino on iPhone turns those scans into workable 3D geometry for review on device.\nProcess We walked room by room while the phone mapped space; Rhino updated the model as we moved.\nScreenshots Still frames from the scan Conclusion Handheld LiDAR plus a focused modeling app is enough for early layout exploration before committing to bigger CAD or renovation steps.","date":"2024-01-02","date_unix":1704204000,"id":"https://antoineboucher.info/CV/blog/posts/rhino-lidar-apartment-scan/","permalink":"https://antoineboucher.info/CV/blog/posts/rhino-lidar-apartment-scan/","post_kind":"article","section":"posts","summary":"Using Rhino’s iPhone app and LiDAR to scan rooms and reason about layout, furniture, and space.","tag_refs":[{"name":"LiDAR","permalink":"https://antoineboucher.info/CV/blog/tags/lidar/"},{"name":"Rhino","permalink":"https://antoineboucher.info/CV/blog/tags/rhino/"},{"name":"IOS","permalink":"https://antoineboucher.info/CV/blog/tags/ios/"},{"name":"3D Scanning","permalink":"https://antoineboucher.info/CV/blog/tags/3d-scanning/"},{"name":"Architecture","permalink":"https://antoineboucher.info/CV/blog/tags/architecture/"}],"tags":["LiDAR","Rhino","iOS","3D Scanning","Architecture"],"tags_text":"LiDAR Rhino iOS 3D Scanning Architecture","thumb":"https://antoineboucher.info/CV/blog/posts/rhino-lidar-apartment-scan/featured_hu_8c29c463a4f4989d.jpg","title":"LiDAR apartment scan with Rhino on iPhone"},{"content":"This site’s bio sums up the slice of the field I care about most: backend, platform, and DevSecOps.
This post is a longer look at how I think about that journey — not a timeline of jobs, but the ideas that kept showing up once I stopped treating “shipping features” as the only scoreboard.\nFrom features to systems Early on, progress often feels linear: tickets closed, endpoints added, screens shipped. That work matters. Over time, though, the interesting problems sit one level up: how services talk to each other, how failures propagate, how a change in one team’s repo affects everyone else on Monday morning. Backend engineering stops being “write the handler” and becomes “design something that stays understandable when you’re not in the room.”\nPlatform thinking extends that outward. If you have ever set up CI, standardized logging, or made it easier for another developer to run the stack locally, you have already done platform work. The goal is to lower the tax on everyday work: fewer one-off runbooks, fewer “works on my machine” threads, more repeatable paths from idea to production.\nReliability and ownership Reliability is not only uptime graphs. It is also whether teammates trust the system enough to move fast. That trust comes from clear ownership (who fixes what when it breaks), observable behavior (metrics, traces, logs that answer real questions), and changes that are small enough to reason about.\nI have learned to treat incidents and near-misses as design feedback. Postmortems are useful, but the deeper win is folding those lessons into defaults: better alerts, safer deploys, clearer boundaries between components. Ownership means carrying that loop even when no one assigned a ticket for it.\nSecurity as part of delivery DevSecOps, for me, is not a separate gate at the end of a sprint. It means shifting concerns left in ways that fit real teams: dependency hygiene, secret handling, least-privilege access, and threat modeling that is short enough to happen in a normal planning conversation.
Security work that only lives in a specialist’s head does not scale; security habits that live in pipelines and conventions do.\nThe same mindset applies to third-party services and cloud resources. If you cannot explain what exposes what, you do not yet have a deployable story — you have a gamble with good branding.\nLearning and tools Tools change constantly. Frameworks, cloud APIs, and AI-assisted workflows will keep evolving. What compounds is judgment: knowing when to adopt something, when to wrap it, and when to say no. I still read docs, break things in sandboxes, and borrow ideas from open source and from other engineers’ write-ups (including the messier ones — those often contain the constraints that matter).\nI also value writing — short posts, diagrams, internal notes — because explaining something badly is often the first step toward understanding it well.\nWhat I optimize for next Going forward, I care about the same themes with sharper edges: clearer platforms, safer delivery, and systems that stay legible as they grow. If you are early in your career, my biased advice is to chase problems where you can see the whole loop: code, deploy, operate, improve. 
That loop is where backend, platform, and DevSecOps stop being buzzwords and become the job.\nThanks for reading — if any of this resonates, you will find more concrete notes elsewhere on this site under posts and projects.","date":"2023-12-30","date_unix":1703944800,"id":"https://antoineboucher.info/CV/blog/posts/software-engineering-journey/","permalink":"https://antoineboucher.info/CV/blog/posts/software-engineering-journey/","post_kind":"article","section":"posts","summary":"Reflections on growing as a software engineer across backend systems, platform work, and DevSecOps — what stuck and what I optimize for now.","tag_refs":[{"name":"Software Engineering","permalink":"https://antoineboucher.info/CV/blog/tags/software-engineering/"},{"name":"Career","permalink":"https://antoineboucher.info/CV/blog/tags/career/"},{"name":"Learning","permalink":"https://antoineboucher.info/CV/blog/tags/learning/"},{"name":"Backend","permalink":"https://antoineboucher.info/CV/blog/tags/backend/"},{"name":"DevSecOps","permalink":"https://antoineboucher.info/CV/blog/tags/devsecops/"},{"name":"Platform Engineering","permalink":"https://antoineboucher.info/CV/blog/tags/platform-engineering/"}],"tags":["Software Engineering","Career","Learning","Backend","DevSecOps","Platform Engineering"],"tags_text":"Software Engineering Career Learning Backend DevSecOps Platform Engineering","thumb":"/CV/blog/images/post-kind-article.png","title":"My journey in software engineering"},{"content":"\nAs artificial intelligence continues to advance, more and more companies are incorporating AI-powered chatbots into their customer service systems. These chatbots can handle a wide range of customer inquiries, from simple questions to more complex issues. However, the cost of implementing and maintaining such chatbots is an important factor to consider. 
In this article, we will calculate the costs of using GPT-4 and GPT-3.5-turbo models with a message cap of 25 messages every 3 hours for one month of usage, considering the same average prompt sizes (50 to 200 tokens).\nOpenAI API The OpenAI API can be used for a wide range of natural language processing (NLP) tasks. Some of the most common uses of the API include:\nLanguage Translation: Translate text from one language to another, making it easy to communicate with people who speak different languages. Text Generation: Generate new text based on a given prompt or input. This can be used to generate headlines, summaries, and even entire articles. Text Summarization: Summarize long documents or articles into shorter, more concise versions. This can save time and make it easier to read and understand important information. Chatbot Development: Develop chatbots that can understand and respond to natural language input. This can be used for customer service, virtual assistants, and other applications. Question Answering: Answer questions with high accuracy and fluency; this feature is available with all GPT-3 engines. Language Understanding: Understand the meaning of text; this can be used to analyze customer feedback, analyze customer sentiment, and more. Text Completion: Complete text based on a given prompt; this can be used for autocompleting forms, writing emails, and more. Text Classification: Classify text into different categories, such as spam or not spam, positive or negative sentiment, etc. Model Comparison GPT-4 GPT-4 offers advanced problem-solving capabilities and broader general knowledge, making it more accurate than its predecessors. It excels in areas like creativity, visual input, and longer context, handling over 25,000 words of text for various applications. 
At launch, some of these capabilities (such as visual input) were still behind a waitlist.\nIn terms of performance, GPT-4 scores in higher approximate percentiles of human test-takers on exams such as the Uniform Bar Exam and the Biology Olympiad than ChatGPT (GPT-3.5) does.\nSafety and alignment improvements in GPT-4 include training with human feedback, continuous improvement from real-world use, and GPT-4-assisted safety research.\nVarious organizations have collaborated with OpenAI to build innovative products using GPT-4, including Duolingo, Be My Eyes, Stripe, Morgan Stanley, Khan Academy, and the Government of Iceland.\nDespite its impressive capabilities, GPT-4 still has known limitations, such as social biases, hallucinations, and susceptibility to adversarial prompts. OpenAI is committed to addressing these issues and promoting transparency, user education, and AI literacy. GPT-4 is available on ChatGPT Plus and as an API for developers to build applications and services. OpenAI is excited to see how people utilize GPT-4 as they work towards developing empowering technologies.\nGPT-3.5-turbo This is the engine behind the free tier of ChatGPT (without ChatGPT Plus). Typical uses include:\nDraft an email or other piece of writing Write Python code Answer questions about a set of documents Create conversational agents Give your software a natural language interface Tutor in a range of subjects Translate languages Simulate characters for video games and much more Choosing between GPT-4 and GPT-3.5-turbo comes down to quality, latency, and budget: GPT-4 is stronger on difficult reasoning and long context, while GPT-3.5-turbo remains the workhorse for many chat and tooling scenarios.
When you model costs, combine expected tokens per turn, traffic, and rate limits—especially if you cap messages per user per hour.","date":"2023-04-10","date_unix":1681135200,"id":"https://antoineboucher.info/CV/blog/posts/gpt4-api-costs-overview/","permalink":"https://antoineboucher.info/CV/blog/posts/gpt4-api-costs-overview/","post_kind":"article","section":"posts","summary":"Notes on GPT-4 and GPT-3.5-turbo use cases, strengths, and thinking about chatbot API costs at scale.","tag_refs":[{"name":"AI","permalink":"https://antoineboucher.info/CV/blog/tags/ai/"},{"name":"ChatGPT","permalink":"https://antoineboucher.info/CV/blog/tags/chatgpt/"},{"name":"OpenAI","permalink":"https://antoineboucher.info/CV/blog/tags/openai/"},{"name":"API","permalink":"https://antoineboucher.info/CV/blog/tags/api/"},{"name":"NLP","permalink":"https://antoineboucher.info/CV/blog/tags/nlp/"}],"tags":["AI","ChatGPT","OpenAI","API","NLP"],"tags_text":"AI ChatGPT OpenAI API NLP","thumb":"/CV/blog/images/post-kind-article.png","title":"GPT-4 vs GPT-3.5 — capabilities and API cost framing"},{"content":"Introduction In today\u0026rsquo;s digital world, having an online resume is crucial for showcasing your professional profile. One effective way to create an online resume is by using the JSON Resume npm package. This package allows you to write your resume in JSON and then export it to various formats such as HTML, PDF, or even integrate it into your personal website.\nJSON Resume Format JSON Resume is a community-driven open-source initiative to create a JSON-based standard for resumes. The format is lightweight and easy to use, making it perfect for building tools around it.\nHere\u0026rsquo;s a breakdown of the key sections in a typical JSON Resume:\nThe Basics Section This section contains your basic information like name, job title, contact information, and a brief summary. 
It also includes your location and links to your professional profiles.\n\u0026#34;basics\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;Your Name\u0026#34;, \u0026#34;label\u0026#34;: \u0026#34;Job Title\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;your.email@example.com\u0026#34;, \u0026#34;website\u0026#34;: \u0026#34;https://yourwebsite.com\u0026#34;, \u0026#34;summary\u0026#34;: \u0026#34;A brief summary about yourself.\u0026#34;, \u0026#34;location\u0026#34;: { \u0026#34;city\u0026#34;: \u0026#34;City\u0026#34;, \u0026#34;region\u0026#34;: \u0026#34;Region\u0026#34;, \u0026#34;countryCode\u0026#34;: \u0026#34;Country Code\u0026#34; }, \u0026#34;profiles\u0026#34;: [ { \u0026#34;network\u0026#34;: \u0026#34;LinkedIn\u0026#34;, \u0026#34;username\u0026#34;: \u0026#34;yourusername\u0026#34;, \u0026#34;url\u0026#34;: \u0026#34;https://www.linkedin.com/in/yourusername/\u0026#34; } ] } Work Experience This section details your professional experience. You can include the company name, your position, the period of employment, and a summary of your role and achievements.\n\u0026#34;work\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;Company Name\u0026#34;, \u0026#34;position\u0026#34;: \u0026#34;Your Position\u0026#34;, \u0026#34;startDate\u0026#34;: \u0026#34;YYYY-MM-DD\u0026#34;, \u0026#34;endDate\u0026#34;: \u0026#34;YYYY-MM-DD\u0026#34;, \u0026#34;summary\u0026#34;: \u0026#34;Description of your role.\u0026#34; } ] Education List your academic qualifications, the institution, the degree or course, and the period of study.\n\u0026#34;education\u0026#34;: [ { \u0026#34;institution\u0026#34;: \u0026#34;University Name\u0026#34;, \u0026#34;area\u0026#34;: \u0026#34;Field of Study\u0026#34;, \u0026#34;studyType\u0026#34;: \u0026#34;Degree Type\u0026#34;, \u0026#34;startDate\u0026#34;: \u0026#34;YYYY-MM-DD\u0026#34;, \u0026#34;endDate\u0026#34;: \u0026#34;YYYY-MM-DD\u0026#34; } ] Skills Mention your skills along with the proficiency level and relevant 
keywords.\n\u0026#34;skills\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;Programming\u0026#34;, \u0026#34;level\u0026#34;: \u0026#34;Intermediate\u0026#34;, \u0026#34;keywords\u0026#34;: [\u0026#34;Python\u0026#34;, \u0026#34;JavaScript\u0026#34;] } ] Projects Highlight significant projects you\u0026rsquo;ve worked on, including the project name, duration, a brief description, and a URL if available.\n\u0026#34;projects\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;Project Name\u0026#34;, \u0026#34;startDate\u0026#34;: \u0026#34;YYYY-MM-DD\u0026#34;, \u0026#34;endDate\u0026#34;: \u0026#34;YYYY-MM-DD\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;Project description.\u0026#34;, \u0026#34;url\u0026#34;: \u0026#34;https://projecturl.com\u0026#34; } ] Using the npm Package Install resume-cli globally: This command line tool allows you to export your resume in different formats.\nnpm install -g resume-cli Create your resume.json: Follow the JSON Resume schema to create your resume.\nExport your resume: Use the CLI to export your resume to different formats.\nresume export resume.html resume export resume.pdf Hosting: You can host your JSON resume for free using the JSON Resume Registry.\nThe JSON Resume npm package provides a standardized, flexible, and easy way to create and share your professional profile. By following the JSON schema and using the CLI tool, you can generate a modern and attractive resume that can be shared with potential employers and contacts.\nAdditional Resources for Enhancing Your Online Resume with Hugo and JSON Resume When creating a dynamic and engaging online resume using Hugo and the JSON Resume module, there are additional resources that can enhance your experience and provide more customization options. Here\u0026rsquo;s a list of valuable resources you might find helpful:\nProfile Studio Profile Studio is an online tool that allows you to preview and customize your JSON Resume in real-time. 
It\u0026rsquo;s a great way to see how your resume will look and make adjustments on the fly.\nPreview your resume: Profile Studio Preview SkillSet Visualization SkillSet is an innovative tool that visualizes the skills section of your JSON Resume. It uses D3.js to create an intuitive and interactive display of your abilities and expertise.\nVisualize your skills: SkillSet LinkedIn to JSON Resume Exporter This handy tool allows you to export your LinkedIn profile into the JSON Resume format. It\u0026rsquo;s a quick way to transfer your professional data into a structured resume format.\nExport LinkedIn profile: LinkedIn to JSON Resume Exporter Hugo-Mod-JSON-Resume This Hugo module is essential for integrating JSON Resume into your Hugo site. It supports multilingual data and offers templates for various sections of your resume.\nIntegrate JSON Resume with Hugo: Hugo-Mod-JSON-Resume Utilizing these resources, you can effectively create a visually appealing and interactive online resume that showcases your professional journey in the best light possible.","date":"2022-09-10","date_unix":1662818400,"id":"https://antoineboucher.info/CV/blog/posts/professional-resume-json-resume/","permalink":"https://antoineboucher.info/CV/blog/posts/professional-resume-json-resume/","post_kind":"article","section":"posts","summary":"Use the JSON Resume schema and npm tooling to publish HTML, PDF, or embedded CV data.","tag_refs":[{"name":"JSON Resume","permalink":"https://antoineboucher.info/CV/blog/tags/json-resume/"},{"name":"Npm","permalink":"https://antoineboucher.info/CV/blog/tags/npm/"},{"name":"Career","permalink":"https://antoineboucher.info/CV/blog/tags/career/"},{"name":"HTML","permalink":"https://antoineboucher.info/CV/blog/tags/html/"}],"tags":["JSON Resume","npm","Career","HTML"],"tags_text":"JSON Resume npm Career HTML","thumb":"/CV/blog/images/post-kind-article.png","title":"Creating a professional résumé with JSON Resume"},{"content":"Github: 
https://lnkd.in/en3dSVuQ\nPlugin URL: https://lnkd.in/exVNZMnT\nD2COpenAIPlugin is a plugin for ChatGPT that enables users to generate diagrams using PlantUML, Mermaid, D2. This plugin enhances the capabilities of ChatGPT by providing a seamless way to create diverse and creative diagrams.\nFor a prompt-in-the-chat workflow (AIPRM template, cache hit/miss sequence examples, and canvas-tool tips), see Diagram prompts with ChatGPT and AIPRM — complementary to this plugin-based approach.\n🤖 ChatGPT UML Plugins - DEMO\nUML Diagram Support: Generate UML diagrams on the fly to visually represent your AI models, making it easier to understand and communicate complex ideas. Extensive Documentation: A comprehensive guide that covers everything from installation to usage, ensuring a smooth experience for developers of all skill levels.\nGitHub: repository\nFeedback and contributions welcome.","date":"2022-09-06","date_unix":1662472800,"id":"https://antoineboucher.info/CV/blog/posts/d2c-openai-diagram-plugin/","permalink":"https://antoineboucher.info/CV/blog/posts/d2c-openai-diagram-plugin/","post_kind":"article","section":"posts","summary":"ChatGPT plugin to generate PlantUML, Mermaid, and D2 diagrams from conversation.","tag_refs":[{"name":"ChatGPT","permalink":"https://antoineboucher.info/CV/blog/tags/chatgpt/"},{"name":"OpenAI","permalink":"https://antoineboucher.info/CV/blog/tags/openai/"},{"name":"Plugin","permalink":"https://antoineboucher.info/CV/blog/tags/plugin/"},{"name":"PlantUML","permalink":"https://antoineboucher.info/CV/blog/tags/plantuml/"},{"name":"Mermaid","permalink":"https://antoineboucher.info/CV/blog/tags/mermaid/"},{"name":"D2","permalink":"https://antoineboucher.info/CV/blog/tags/d2/"}],"tags":["ChatGPT","OpenAI","Plugin","PlantUML","Mermaid","D2"],"tags_text":"ChatGPT OpenAI Plugin PlantUML Mermaid D2","thumb":"https://antoineboucher.info/CV/blog/posts/d2c-openai-diagram-plugin/featured.gif","title":"D2C OpenAI plugin — diagrams with PlantUML, 
Mermaid, and D2"},{"content":"The AIPRM browser extension gives you reusable prompt templates inside ChatGPT. Combined with a small structured prompt (diagram type, what to draw, why, and which tool), you get consistent output whether you want text-first formats like PlantUML or Mermaid, or a recipe for redrawing the same flow in a canvas tool.\nFull article in French (same slug — you can also switch to FR in the site header).\nAIPRM prompt template (copy and adapt)\nFill one line per dimension. You can paste the block below into ChatGPT (with or without AIPRM) and edit the bracketed values.\n[DIAGRAM TYPE] - Sequence | Use Case | Class | Activity | Component | State | Object | Deployment | Timing | Network | Wireframe | Archimate | Gantt | MindMap | WBS | JSON | YAML\n[ELEMENT TYPE] - Actors | Messages | Objects | Classes | Interfaces | Components | States | Nodes | Edges | Links | Frames | Constraints | Entities | Relationships | Tasks | Events | Modules\n[PURPOSE] - Communication | Planning | Design | Analysis | Modeling | Documentation | Implementation | Testing | Debugging (optional: add your stack or scenario, e.g. \u0026#34;Communication: React server frontend — FastAPI backend — Redis cache — MongoDB database\u0026#34;)\n[DIAGRAMMING TOOL] - PlantUML | Mermaid | Draw.io | Lucidchart | Creately | Gliffy\nExample: sequence diagram for a cached API stack\n[DIAGRAM TYPE] - Sequence\n[ELEMENT TYPE] - Messages\n[PURPOSE] - Communication: Frontend React Server - Backend FastAPI - Cache Redis - Database MongoDB\n[DIAGRAMMING TOOL] - PlantUML\nPublic AIPRM link\nI published a prompt you can add from the AIPRM library here: AIPRM prompt (LinkedIn). Use it as a starting point, then narrow [PURPOSE] and [DIAGRAMMING TOOL] for your team’s stack and deliverables.\nIntroduction to sequence diagrams\nSequence diagrams are a type of UML diagram that show how parts of a system exchange messages over time.
They are useful for onboarding, design reviews, and documenting request paths (especially when a cache or database sits on the critical path).\nFrontend–backend communication with caching\nThe figure below is a concrete example: a user request flows through a React frontend and FastAPI backend, with Redis as a cache and MongoDB as the system of record. The source listings that follow include both a cache hit and a cache miss branch.\nPlantUML (cache hit and cache miss)\n@startuml\nactor User\nparticipant \u0026#34;ReactServer\u0026#34; as RS\nparticipant \u0026#34;FastAPIServer\u0026#34; as API\nparticipant \u0026#34;RedisCache\u0026#34; as R\ndatabase \u0026#34;MongoDB\u0026#34; as M\nUser -\u0026gt; RS: Sends Request\nRS -\u0026gt; API: Forwards Request\nalt Cache hit\nAPI -\u0026gt; R: Check Cache\nR --\u0026gt; API: Found Data\nAPI -\u0026gt; RS: Sends Response from Cache\nRS -\u0026gt; User: Returns Response from Cache\nelse Cache miss\nAPI -\u0026gt; R: Get Data from Cache\nR --\u0026gt; API: Data Not Found\nAPI -\u0026gt; M: Get Data from DB\nM --\u0026gt; API: Returns Data\nAPI -\u0026gt; R: Save Data in Cache\nR --\u0026gt; API: Data Saved\nAPI -\u0026gt; RS: Sends Response\nRS -\u0026gt; User: Returns Response\nend\n@enduml\nMermaid (cache hit and cache miss)\nsequenceDiagram\nactor User\nparticipant ReactServer\nparticipant FastAPIServer\nparticipant RedisCache\nparticipant MongoDB\nUser-\u0026gt;\u0026gt;ReactServer: Sends Request\nReactServer-\u0026gt;\u0026gt;FastAPIServer: Forwards Request\nalt Cache hit\nFastAPIServer-\u0026gt;\u0026gt;RedisCache: Check Cache\nRedisCache--\u0026gt;\u0026gt;FastAPIServer: Found Data\nFastAPIServer-\u0026gt;\u0026gt;ReactServer: Sends Response from Cache\nReactServer-\u0026gt;\u0026gt;User: Returns Response from Cache\nelse Cache miss\nFastAPIServer-\u0026gt;\u0026gt;RedisCache: Get Data from Cache\nRedisCache--\u0026gt;\u0026gt;FastAPIServer: Data Not Found\nFastAPIServer-\u0026gt;\u0026gt;MongoDB: Get Data from DB\nMongoDB--\u0026gt;\u0026gt;FastAPIServer: Returns Data\nFastAPIServer-\u0026gt;\u0026gt;RedisCache: Save Data in Cache\nRedisCache--\u0026gt;\u0026gt;FastAPIServer: Data Saved\nFastAPIServer-\u0026gt;\u0026gt;ReactServer: Sends Response\nReactServer-\u0026gt;\u0026gt;User: Returns Response\nend\nDraw.io, Lucidchart, Creately, and Gliffy\nThese tools are canvas-first: the fastest path is often to generate PlantUML or Mermaid in ChatGPT, then:\nExport from a PlantUML server or CLI to SVG or PNG and import that graphic into your diagram tool as a baseline layer, or\nAsk ChatGPT (using your template) for a numbered list of lifelines and messages in order, and recreate them with the tool’s shapes and connectors.\nThat avoids blank-canvas syndrome while keeping the diagram editable for styling and annotations your team expects.\nConclusion\nStructured prompts make diagramming repeatable: you choose the diagram type, the elements to emphasize, the purpose (and context), and the tool so the model’s answer matches how you will ship the artifact. AIPRM simply makes that workflow one click away once the template lives in your library.\nHashtags for sharing: #ChatGPT #diagram #UML #software #designtools #PlantUML #Mermaid #Drawio #Lucidchart #Creately #Gliffy","date":"2022-09-06","date_unix":1662472800,"id":"https://antoineboucher.info/CV/blog/posts/chatgpt-airprm-sequence-diagrams/","permalink":"https://antoineboucher.info/CV/blog/posts/chatgpt-airprm-sequence-diagrams/","post_kind":"article","section":"posts","summary":"AIPRM prompt template for diagram type, elements, purpose, and tool — plus PlantUML and Mermaid sequence examples (React, FastAPI, Redis, MongoDB, cache
hit/miss).","tag_refs":[{"name":"ChatGPT","permalink":"https://antoineboucher.info/CV/blog/tags/chatgpt/"},{"name":"AIPRM","permalink":"https://antoineboucher.info/CV/blog/tags/aiprm/"},{"name":"PlantUML","permalink":"https://antoineboucher.info/CV/blog/tags/plantuml/"},{"name":"Mermaid","permalink":"https://antoineboucher.info/CV/blog/tags/mermaid/"},{"name":"UML","permalink":"https://antoineboucher.info/CV/blog/tags/uml/"},{"name":"Diagram Tools","permalink":"https://antoineboucher.info/CV/blog/tags/diagram-tools/"}],"tags":["ChatGPT","AIPRM","PlantUML","Mermaid","UML","Diagram Tools"],"tags_text":"ChatGPT AIPRM PlantUML Mermaid UML Diagram Tools","thumb":"https://antoineboucher.info/CV/blog/posts/chatgpt-airprm-sequence-diagrams/featured_hu_24e71c4c1e6bb9a2.jpeg","title":"Diagram prompts with ChatGPT and AIPRM (PlantUML, Mermaid, and more)"},{"content":"I recently took part in Expo Manger Santé 2023 at Place des Congrès in Montreal. Event photos were taken by OS7Media (os7mediamatrix@gmail.com) — thank you for the shots used here.\nA Successful Sales Endeavor As a salesman, I am passionate about the rich, savory taste of olives and their health benefits. Over two days, I had the opportunity to share this passion with attendees, which translated into remarkable sales, netting $300. It was not just about the sales, though; it was about the connections made and the stories shared over the love of olives.\nA Feast for the Senses The expo was a haven for anyone with an appreciation for fresh produce. I indulged in a variety of fruits and vegetables, but the Gen V lettuces were particularly memorable for their crisp freshness.\nExploring High-End Natural Products Among the many exhibitors, BKind products stood out for their premium quality, although their prices matched their upscale image. 
It\u0026rsquo;s always intriguing to see the range of products that align with a healthier lifestyle.\nNeighbors Worth Noting Adjacent to my kiosk was Mate Libre, a brand that left a lasting impression with their delicious drinks. As a fan of maté, I was delighted by their offerings. The matéina drink, in particular, was a standout for its unique flavor profile.\nMorning Rush Solutions A discovery that I\u0026rsquo;m eager to incorporate into my routine is the Wise smoothie mix, both in green and red varieties. They\u0026rsquo;re perfect for those busy mornings when you need a nutritious boost on the go.\nIn conclusion Expo Manger Santé 2023 was a celebration of health, taste, and community. I’m grateful for the experience and look forward to the next edition.","date":"2022-09-06","date_unix":1662472800,"id":"https://antoineboucher.info/CV/blog/posts/expo-manger-sante-2023/","permalink":"https://antoineboucher.info/CV/blog/posts/expo-manger-sante-2023/","post_kind":"conference","section":"posts","summary":"Two days at Montreal’s health-food expo — sales on the olive kiosk, neighbors like Mate Libre, and standout products.","tag_refs":[{"name":"Expo Manger Santé","permalink":"https://antoineboucher.info/CV/blog/tags/expo-manger-sant%C3%A9/"},{"name":"Montreal","permalink":"https://antoineboucher.info/CV/blog/tags/montreal/"},{"name":"Conference","permalink":"https://antoineboucher.info/CV/blog/tags/conference/"},{"name":"Food","permalink":"https://antoineboucher.info/CV/blog/tags/food/"},{"name":"Photography","permalink":"https://antoineboucher.info/CV/blog/tags/photography/"}],"tags":["Expo Manger Santé","Montreal","Conference","Food","Photography"],"tags_text":"Expo Manger Santé Montreal Conference Food Photography","thumb":"https://antoineboucher.info/CV/blog/posts/expo-manger-sante-2023/featured_hu_10f9749020afe37b.jpeg","title":"Expo Manger Santé 2023 — olives, kiosks, and discoveries"},{"content":"At Cédille, we hosted a session with GitHub and Arctiq focused 
on GitHub Copilot and AI-assisted development. Highlights included Copilot Chat with /createNotebook for quick Jupyter notebooks from existing code, and pointers to GitHub Next experiments.\nThanks to speakers Thierry Madkaud and Eldrick Wega, and to everyone who joined.\nFull article in French (same slug — you can also switch to FR in the header).","date":"2022-09-06","date_unix":1662472800,"id":"https://antoineboucher.info/CV/blog/posts/github-copilot-cedille-session/","permalink":"https://antoineboucher.info/CV/blog/posts/github-copilot-cedille-session/","post_kind":"conference","section":"posts","summary":"Short English summary of a school/industry session on Copilot Chat, notebooks, and GitHub Next — full write-up in French.","tag_refs":[{"name":"GitHub Copilot","permalink":"https://antoineboucher.info/CV/blog/tags/github-copilot/"},{"name":"Conference","permalink":"https://antoineboucher.info/CV/blog/tags/conference/"},{"name":"Cédille","permalink":"https://antoineboucher.info/CV/blog/tags/c%C3%A9dille/"},{"name":"Arctiq","permalink":"https://antoineboucher.info/CV/blog/tags/arctiq/"},{"name":"AI","permalink":"https://antoineboucher.info/CV/blog/tags/ai/"}],"tags":["GitHub Copilot","Conference","Cédille","Arctiq","AI"],"tags_text":"GitHub Copilot Conference Cédille Arctiq AI","thumb":"https://antoineboucher.info/CV/blog/posts/github-copilot-cedille-session/featured_hu_4172ab0040a50f2f.jpeg","title":"GitHub Copilot session at Cédille (with GitHub \u0026 Arctiq)"},{"content":"These notes come from comparing options for website live chat, chatbots, and a shared support inbox. The products below are not interchangeable: some are full communications stacks, others are marketing automation, and one is an open-source helpdesk. Pricing, channels, and features change often—treat this as orientation, then confirm on each vendor’s site.\n3CX 3CX is primarily UCaaS / PBX (phones, meetings, extensions). 
Its web live chat and related widgets sit in that same ecosystem, which helps if you already route voice and chat through 3CX and want one vendor for queues and agents.\nFor Live Chat and Talk (including CMS plugins such as WordPress), follow the current steps in the official docs rather than a static checklist—wizard text and integration names move between releases. Start from the 3CX documentation.\nI also kept separate notes in French for configuring the Live Chat and Talk plugin; those mirror whatever the official guide said at the time and should be validated against the docs above.\nRelated: analytics and bots (not 3CX-specific) This is only tangential to website live chat, but useful if you are exploring Microsoft-side bot patterns with analytics:\nYouTube — Microsoft ChatBot (Power BI) ManyChat ManyChat is built for conversational marketing and automation, with a strong tilt toward Meta (Instagram/Facebook) and growth workflows: broadcasts, sequences, and lead capture.\nGood fit when the goal is campaigns and funnel automation on social, less so when you need a neutral, multi-channel helpdesk with deep ticketing and SLAs across email, chat, and phone in one open-core product.\nManyChat pricing Kommunicate Kommunicate targets human + bot collaboration: bot handles the first turn, then hands off to agents on the website and common messaging channels. It is a managed SaaS—you integrate and configure rather than operating the stack yourself.\nUseful when you want dialogflow-style bot wiring and a polished agent desk without self-hosting; compare total cost and data residency to self-hosted options if that matters for your org.\nChatwoot Chatwoot is an open-source customer engagement suite (license: AGPL). 
You can use Chatwoot Cloud or self-host (Docker and other paths are documented for operators).\nConceptually, it is closer to “shared inbox + omnichannel conversations” than to a PBX or a pure marketing bot:\nUnified inbox for web widget, email, and other channels (exact channel set evolves—see their docs).\nTeams, labels, automation, and conversation history aimed at support and sales follow-up.\nAPIs and webhooks for integrations and custom workflows.\nTrade-offs: self-hosting gives control and can reduce per-seat SaaS cost, but you own backups, upgrades, and security. Feature depth versus large proprietary suites varies by channel; verify what you need (e.g. voice, specific CRMs) against their roadmap and docs.\nLinks:\nChatwoot on GitHub\nChatwoot (product overview)\nChatwoot developer documentation (API, setup, and self-hosting guides)\nQuick comparison (primary focus | typical deployment | open source):\n3CX: Telephony + UC + web chat in one stack | Cloud or on-prem PBX + agents | No\nManyChat: Marketing automation, often Meta-first | SaaS | No\nKommunicate: Bot + human handoff, managed CX | SaaS | No\nChatwoot: Omnichannel inbox, support-oriented | SaaS (cloud) or self-hosted | Yes (AGPL)\nResources\n3CX documentation\nManyChat pricing\nKommunicate\nChatwoot\nChatwoot — GitHub\nChatwoot — developer docs\nYouTube — Microsoft ChatBot (Power BI) ","date":"2022-09-06","date_unix":1662472800,"id":"https://antoineboucher.info/CV/blog/posts/livechat-platform-notes/","permalink":"https://antoineboucher.info/CV/blog/posts/livechat-platform-notes/","post_kind":"article","section":"posts","summary":"How 3CX, ManyChat, Kommunicate, and Chatwoot differ for website chat, bots, and team inboxes—plus links and a compact comparison table.","tag_refs":[{"name":"Live Chat","permalink":"https://antoineboucher.info/CV/blog/tags/live-chat/"},{"name":"Chatbot","permalink":"https://antoineboucher.info/CV/blog/tags/chatbot/"},{"name":"Customer
Support","permalink":"https://antoineboucher.info/CV/blog/tags/customer-support/"},{"name":"Open Source","permalink":"https://antoineboucher.info/CV/blog/tags/open-source/"},{"name":"3CX","permalink":"https://antoineboucher.info/CV/blog/tags/3cx/"},{"name":"Chatwoot","permalink":"https://antoineboucher.info/CV/blog/tags/chatwoot/"}],"tags":["Live Chat","Chatbot","Customer Support","Open Source","3CX","Chatwoot"],"tags_text":"Live Chat Chatbot Customer Support Open Source 3CX Chatwoot","thumb":"/CV/blog/images/post-kind-article.png","title":"Live chat and support platforms compared (3CX, ManyChat, Kommunicate, Chatwoot)"},{"content":"I\u0026rsquo;ve recently embarked on a journey to overhaul my home network setup. The transition from Draw.io to PlantUML C4 for creating deployment diagrams has been a game-changer. 🏡\nPlantUML C4 offers a text-based 📝 approach that integrates seamlessly with version control systems, making it an ideal tool for infrastructure as code (IaC). 🏗️\nI am also moving to Cloudflare for DNS management ✅ and will also use Terraform and GitHub Actions for CD. 🔁\nOn the virtualization front, I am now using Proxmox and XCP-ng as hypervisors, with Talos OS powering my Kubernetes deployments for my personal project.
💻","date":"2022-09-06","date_unix":1662472800,"id":"https://antoineboucher.info/CV/blog/posts/home-network-plantuml-c4/","permalink":"https://antoineboucher.info/CV/blog/posts/home-network-plantuml-c4/","post_kind":"article","section":"posts","summary":"Moving home-network diagrams to PlantUML C4, Cloudflare DNS, Terraform, and Kubernetes on Proxmox / XCP-ng.","tag_refs":[{"name":"PlantUML","permalink":"https://antoineboucher.info/CV/blog/tags/plantuml/"},{"name":"C4 Model","permalink":"https://antoineboucher.info/CV/blog/tags/c4-model/"},{"name":"Homelab","permalink":"https://antoineboucher.info/CV/blog/tags/homelab/"},{"name":"Kubernetes","permalink":"https://antoineboucher.info/CV/blog/tags/kubernetes/"},{"name":"Proxmox","permalink":"https://antoineboucher.info/CV/blog/tags/proxmox/"},{"name":"Terraform","permalink":"https://antoineboucher.info/CV/blog/tags/terraform/"},{"name":"Cloudflare","permalink":"https://antoineboucher.info/CV/blog/tags/cloudflare/"}],"tags":["PlantUML","C4 Model","Homelab","Kubernetes","Proxmox","Terraform","Cloudflare"],"tags_text":"PlantUML C4 Model Homelab Kubernetes Proxmox Terraform Cloudflare","thumb":"https://antoineboucher.info/CV/blog/posts/home-network-plantuml-c4/featured_hu_e114dedb52092234.jpeg","title":"Network architecture — Lucidchart to PlantUML C4"},{"content":"Notes from the Run:ai webinar on running and scaling inference workloads on AWS (Americas). 
Run:ai focuses on scheduling, visibility, and efficiency for GPU-backed models in shared environments.\nDashboard\nOverview of jobs and resource usage.\nCLI\nCommand-line operations and automation.\nModels and load\nWorkload management\nInfrastructure view\nDemo\nChallenges\nFor product details, see the official Run:ai documentation and AWS marketplace or partner listings.","date":"2022-09-06","date_unix":1662472800,"id":"https://antoineboucher.info/CV/blog/posts/runai-aws-inference-webinar/","permalink":"https://antoineboucher.info/CV/blog/posts/runai-aws-inference-webinar/","post_kind":"conference","section":"posts","summary":"Sketches from the Run:ai webinar on scaling ML inference on AWS — dashboards, CLI, and cluster view.","tag_refs":[{"name":"Run:ai","permalink":"https://antoineboucher.info/CV/blog/tags/runai/"},{"name":"AWS","permalink":"https://antoineboucher.info/CV/blog/tags/aws/"},{"name":"Machine Learning","permalink":"https://antoineboucher.info/CV/blog/tags/machine-learning/"},{"name":"Inference","permalink":"https://antoineboucher.info/CV/blog/tags/inference/"},{"name":"Kubernetes","permalink":"https://antoineboucher.info/CV/blog/tags/kubernetes/"},{"name":"Conference","permalink":"https://antoineboucher.info/CV/blog/tags/conference/"}],"tags":["Run:ai","AWS","Machine Learning","Inference","Kubernetes","Conference"],"tags_text":"Run:ai AWS Machine Learning Inference Kubernetes Conference","thumb":"https://antoineboucher.info/CV/blog/posts/runai-aws-inference-webinar/featured_hu_cd252b8aa2faa5de.jpeg","title":"Run:ai on AWS — webinar notes (inference \u0026 autoscaling)"},{"content":"Updated April 2026 with current Lens Insights figures.\nTo date, my Snapchat lenses have accumulated 6.21M plays, 12.11M views, 616.4k shares, and 6,893 favorites (all-time, Lens Insights). That started as a personal interest in AR filters and grew into paid work on Fiverr alongside my own experiments.\nBetween 2017 and 2020 I shipped 42 lenses for myself and clients.
A few that carried the most usage include Go Crazy Facetime (~2.9M plays), Face Ghosting (~1.2M plays), and BIG SMILE (~520k plays).\nSnapchat’s audience tools also frame the scale of the ecosystem: a reported ~596M–623M potential audience for lens users, with strong reach in markets such as India and the United States, and device mix roughly Android (~70%) vs iOS (~30%)—a useful nudge to optimize for real hardware, not just the phone on your desk.\nNot every submission cleared review—both Snapchat and clients sometimes said no—but those passes became feedback loops that made the next lens better.\nI wrote a small utility to turn GIF or video into PNG sequences for pipeline work; other Lens Studio creators picked it up too. GIF/Video to PNG for Lens Studio\nIt’s been rewarding to turn AR filters into a side hustle. I also spoke with Snapchat by phone to share Lens Studio product feedback and help improve the tool for creators.\nI want to go deeper on Meta Spark next. The official learning hub is here: Spark AR / Meta Spark learn—if you know tutorials that translate well from a Lens Studio mindset, I’d love your recommendations.","date":"2022-09-06","date_unix":1662472800,"id":"https://antoineboucher.info/CV/blog/posts/snapchat-lens-creator/","permalink":"https://antoineboucher.info/CV/blog/posts/snapchat-lens-creator/","post_kind":"article","section":"posts","summary":"Snapchat lenses with multi-million plays and views (2017–2020), Fiverr clients, tooling, and Lens Studio feedback.","tag_refs":[{"name":"Snapchat","permalink":"https://antoineboucher.info/CV/blog/tags/snapchat/"},{"name":"Lens Studio","permalink":"https://antoineboucher.info/CV/blog/tags/lens-studio/"},{"name":"AR","permalink":"https://antoineboucher.info/CV/blog/tags/ar/"},{"name":"Side Project","permalink":"https://antoineboucher.info/CV/blog/tags/side-project/"}],"tags":["Snapchat","Lens Studio","AR","Side Project"],"tags_text":"Snapchat Lens Studio AR Side 
Project","thumb":"https://antoineboucher.info/CV/blog/posts/snapchat-lens-creator/featured_hu_c2e6d1427b3f29c6.png","title":"Snapchat Lens Creator"},{"content":"Notes from the Snowflake Data-for-Breakfast conference on the Snowflake Cloud Data Platform, data warehousing, integration, and analytics—including a strong keynote from Infostrux.\nOverview Key takeaways Global data operations A healthcare customer case study showed Snowflake managing secure data operations across three continents, simplifying partner data sharing while keeping high availability and strong SLAs.\nCloud data platforms remain a practical backbone for consolidation and analytics; this event was a useful snapshot of where Snowflake is heading.\nTax sector analytics Another customer needed to store very large datasets and analyze them without knowing every question in advance. Snowflake gave them a single place to consolidate and transform data, which sped up troubleshooting and improved visibility into lineage.\nData clean rooms The clean-room model—collaborating on shared data while preserving privacy—came up as relevant when two companies need to compare datasets during due diligence without exposing everything.\nClosing thoughts Snowflake Data-for-Breakfast was a worthwhile look at cloud data platforms, operations at scale, and newer patterns like clean rooms. 
Worth attending if you work in data management or analytics.","date":"2022-09-06","date_unix":1662472800,"id":"https://antoineboucher.info/CV/blog/posts/snowflake-data-for-breakfast/","permalink":"https://antoineboucher.info/CV/blog/posts/snowflake-data-for-breakfast/","post_kind":"conference","section":"posts","summary":"Notes from the Snowflake Data-for-Breakfast conference on the Snowflake Cloud Data Platform, data warehousing, integration, and analytics—including a strong keynote from Infostrux.","tag_refs":[{"name":"Conference","permalink":"https://antoineboucher.info/CV/blog/tags/conference/"},{"name":"Snowflake","permalink":"https://antoineboucher.info/CV/blog/tags/snowflake/"},{"name":"Data Analytics","permalink":"https://antoineboucher.info/CV/blog/tags/data-analytics/"}],"tags":["Conference","Snowflake","Data Analytics"],"tags_text":"Conference Snowflake Data Analytics","thumb":"https://antoineboucher.info/CV/blog/posts/snowflake-data-for-breakfast/featured_hu_ec748499759ede8b.jpeg","title":"Snowflake Data-for-Breakfast Conference Insights"},{"content":"Inspiration from Bryan Johnson’s \u0026ldquo;Blueprint Protocol\u0026rdquo; Personal health tracking, for me, started with Bryan Johnson’s \u0026ldquo;Blueprint Protocol\u0026rdquo;—a push for self-quantification that matched how I already thought about fitness. I wanted the same granularity for my own body, and a Renpho scale with bio-impedance turned out to be a practical way to get a steady stream of numbers beyond simple weight.\nForking hass-renpho and the Home Assistant ecosystem I found hass-renpho, a custom integration that pulls Renpho scale data into Home Assistant. The project had gone quiet, and with the original maintainer unavailable I forked it to extend support for more of the metrics the hardware exposes.\nThat fork dragged me through the normal Home Assistant custom-component path: installing via HACS, configuring credentials, and iterating on entities. 
Along the way I talked with the original maintainer where it made sense—suggesting fixes, sharing what I was seeing in the API, and trying to keep the integration useful for anyone else running a Renpho at home.\nReverse engineering and APKLeaks\nThe mobile app does not publish an official API document, so the next step was to learn what the Android client actually calls. APKLeaks scans the packaged APK for strings—URLs, keys, and other clues—rather than fully decompiling the app into readable source. Running it on the Renpho APK surfaced the HTTP endpoints and enough context to line those calls up with the JSON payloads I cared about (weight, BMI, BMR, body age, fat and muscle estimates, water, protein, visceral-fat indices, and the rest of the bio-impedance-derived fields). In Home Assistant those show up as entities and feed Lovelace cards—gauges for composition, history graphs for weight, and simple entity rows for the “extra” metrics.\n# Simple PyPI installation\npip3 install apkleaks\n# Delving into the source\ngit clone https://github.com/dwisiswant0/apkleaks\ncd apkleaks/\npip3 install -r requirements.txt\nFurther reading:\nAPKLeaks on GitHub\nAPKLeaks in-depth analysis\nDashboard, context, and measurement habits\nRenpho data is only part of the picture. I still use tools like Google Health and MyFitnessPal for activity and food so the scale readings sit next to diet and movement, not in a vacuum.\nDay-to-day swings taught me to treat the numbers as trends, not verdicts. Clothing alone can move the needle by about a kilogram on a bad day; hydration and digestion matter too.
Measuring at a consistent time (for me, mornings, similar conditions) keeps the series usable when I look at the history card in HA.\nThis started as a technical side project and became a steady habit: the dashboard is a single place to see weight trajectory, composition estimates, and the supporting stats the integration exposes—enough to decide whether training or sleep changes are showing up where I expect.\nCommunity and what works for you None of this would be as practical without the Home Assistant and open-source integration ecosystem—forks, issues, and small patches add up. If you are quantifying your own health, I am curious what actually stuck for you: dedicated hardware, phone-only apps, or something self-hosted like this? Share what you use and what you ignore; the useful part is rarely the gadget alone, but how consistently the data fits your routine.","date":"2021-10-10","date_unix":1633874400,"id":"https://antoineboucher.info/CV/blog/posts/renpho-health-api-blueprint/","permalink":"https://antoineboucher.info/CV/blog/posts/renpho-health-api-blueprint/","post_kind":"article","section":"posts","summary":"Forking hass-renpho, surfacing Renpho API endpoints with APKLeaks, and wiring bio-impedance metrics into a Home Assistant Lovelace health dashboard.","tag_refs":[{"name":"Health","permalink":"https://antoineboucher.info/CV/blog/tags/health/"},{"name":"API","permalink":"https://antoineboucher.info/CV/blog/tags/api/"},{"name":"Reverse Engineering","permalink":"https://antoineboucher.info/CV/blog/tags/reverse-engineering/"},{"name":"Home Assistant","permalink":"https://antoineboucher.info/CV/blog/tags/home-assistant/"},{"name":"Home Automation","permalink":"https://antoineboucher.info/CV/blog/tags/home-automation/"}],"tags":["Health","API","Reverse Engineering","Home Assistant","Home Automation"],"tags_text":"Health API Reverse Engineering Home Assistant Home 
Automation","thumb":"https://antoineboucher.info/CV/blog/posts/renpho-health-api-blueprint/featured_hu_c41549a9ce6c2349.jpeg","title":"Renpho scale, Home Assistant, and reverse-engineering the API"},{"content":"Introduction Welcome to a new chapter in my blog where I dive into the intricacies of building a robust home networking system. As a software engineer with a passion for networking protocols and efficient computing, I\u0026rsquo;ve embarked on a journey to design a system that balances performance, security, and cost-effectiveness. This post will detail my experiences and the technical decisions I made along the way.\nEmbracing the Challenge of Home Networking My interest in networking began during my academic years, where I learned about protocols and technologies such as IP and VPNs. Motivated by the high costs of cloud computing, I set out to build a home-based system. My goal was to use older computers, minimizing expenses on hardware and online services, while still achieving a high degree of functionality and efficiency.\nDocker Containers: A Gateway to Versatility An essential part of my project involved extensive research into Docker containers. I focused on free services that could serve as alternatives to existing cloud services, covering a range of applications from document scanning to home management tasks. This exploration into Docker containers not only allowed me to tailor services to my specific needs but also provided a solid foundation for understanding container-based architecture.\nDeveloping with Bash and Home Servers The heart of my system was a home server setup consisting of two main components: a Dell server purchased for a modest sum and an older computer from 2004. I created multiple bash scripts for various operating systems including Ubuntu, CentOS, and Proxmox.
These scripts were instrumental in setting up and managing the servers, demonstrating the power of automation and scripting in a home network environment.\nOvercoming Obstacles and Learning Maintaining this system presented its fair share of challenges. I quickly learned the importance of specialized virtualization software for such projects. This realization led me to use a server specifically designed for virtualization tasks, streamlining the process and enhancing the system\u0026rsquo;s overall stability and performance.\nRemote Management and Security An essential aspect of my setup was the ability to manage computers remotely and monitor the health of services and hardware. I implemented a reverse proxy using Caddy, and for added security, I hid my IP behind an OVH server. This setup not only protected my network from potential hacking attempts but also provided a way to manage traffic effectively.\nWireGuard: A VPN Solution For VPN, I chose WireGuard. Despite its initial complexity in configuration, WireGuard offered a fast, secure, and reliable way to connect my network. I contributed to several projects to simplify its setup, making it more accessible for less tech-savvy users.\nExpanding the Network Upon moving to a new apartment, I expanded my network to include multiple locations. I used tools like Lucidchart to visualize my network architecture and Proxmox to create numerous VMs. This expansion was not just a technical upgrade but also an opportunity to share my knowledge with others, as I used my setup in a club project.\nFuture Projects and Reflections Looking ahead, I am considering migrating to a static website hosted on AWS S3 to reduce deployment costs. Furthermore, I\u0026rsquo;m exploring the use of GitHub for personal projects, appreciating its free and open nature for individual work.\nConclusion This journey in home networking has been a blend of personal passion and professional development.
Through this process, I\u0026rsquo;ve learned the importance of balancing performance, security, and cost. My experience demonstrates that with the right knowledge and tools, creating an efficient home networking system is not only feasible but also incredibly rewarding.","date":"2021-09-06","date_unix":1630936800,"id":"https://antoineboucher.info/CV/blog/posts/home-networking-evolution/","permalink":"https://antoineboucher.info/CV/blog/posts/home-networking-evolution/","post_kind":"article","section":"posts","summary":"Docker on repurposed hardware, bash automation, Caddy, WireGuard, Proxmox VMs, and diagramming the setup.","tag_refs":[{"name":"Networking","permalink":"https://antoineboucher.info/CV/blog/tags/networking/"},{"name":"Homelab","permalink":"https://antoineboucher.info/CV/blog/tags/homelab/"},{"name":"Docker","permalink":"https://antoineboucher.info/CV/blog/tags/docker/"},{"name":"WireGuard","permalink":"https://antoineboucher.info/CV/blog/tags/wireguard/"},{"name":"Proxmox","permalink":"https://antoineboucher.info/CV/blog/tags/proxmox/"},{"name":"Caddy","permalink":"https://antoineboucher.info/CV/blog/tags/caddy/"}],"tags":["Networking","Homelab","Docker","WireGuard","Proxmox","Caddy"],"tags_text":"Networking Homelab Docker WireGuard Proxmox Caddy","thumb":"https://antoineboucher.info/CV/blog/posts/home-networking-evolution/featured_hu_1afa95bb5ff7362e.png","title":"Networking evolution — building a home network lab"},{"content":"This walkthrough is from the jQuery-and-Moment era: you wrap two text inputs in a custom element \u0026lt;daterangepicker-two-input\u0026gt;, register it with customElements.define, and hand the container off to the Date Range Picker plugin. 
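One invariant worth keeping out of the plugin entirely: the range itself (start on or before end, nights as plain date arithmetic). A dependency-free sketch with hypothetical names, not code from the original walkthrough:

```javascript
// Validate a check-in / check-out pair and label it with the night count.
// Hypothetical helper; Moment and the plugin normally cover this ground.
function rangeLabel(startIso, endIso) {
  const MS_PER_DAY = 86400000;
  const start = new Date(startIso + "T00:00:00Z");
  const end = new Date(endIso + "T00:00:00Z");
  if (Number.isNaN(start.getTime()) || Number.isNaN(end.getTime())) {
    throw new Error("invalid date");
  }
  if (start > end) {
    throw new Error("start must not be after end");
  }
  const nights = (end - start) / MS_PER_DAY;
  return `${startIso} -> ${endIso} (${nights} night${nights === 1 ? "" : "s"})`;
}
```

Keeping this check outside the widget means the same rule holds whether the dates arrive from the picker, a query string, or a saved form.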
Newer stacks usually reach for native \u0026lt;input type=\u0026quot;date\u0026quot;\u0026gt;, flatpickr, or framework date components — but plenty of dashboards shipped in the mid-2010s (and many still in maintenance) look exactly like this.\nTutorial: Creating a Custom Date Range Picker Element Introduction You end up with a small reusable tag that opens the familiar range calendar UI (check-in / check-out style) while keeping the markup consistent across pages.\nPrerequisites Basic knowledge of HTML, CSS, and JavaScript jQuery and jQuery UI libraries Date Range Picker plugin Step 1: Setup Basic HTML First, include the necessary libraries in your HTML file\u0026rsquo;s head section:\n\u0026lt;head\u0026gt; \u0026lt;!-- jQuery and jQuery UI --\u0026gt; \u0026lt;script src=\u0026#34;https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.min.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;script src=\u0026#34;https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.12.1/jquery-ui.min.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;!-- Date Range Picker plugin --\u0026gt; \u0026lt;script src=\u0026#34;https://cdn.jsdelivr.net/momentjs/latest/moment.min.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;script src=\u0026#34;https://cdn.jsdelivr.net/npm/daterangepicker/daterangepicker.min.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;link rel=\u0026#34;stylesheet\u0026#34; href=\u0026#34;https://cdn.jsdelivr.net/npm/daterangepicker/daterangepicker.css\u0026#34; /\u0026gt; \u0026lt;link rel=\u0026#34;stylesheet\u0026#34; href=\u0026#34;https://code.jquery.com/ui/1.12.1/themes/base/jquery-ui.css\u0026#34; /\u0026gt; \u0026lt;/head\u0026gt; Step 2: Define Custom Element Structure Create the custom element class in JavaScript:\nclass DaterangepickerDoubleInput extends HTMLElement { constructor() { super(); this.innerHTML = ` \u0026lt;div class=\u0026#34;combine-input-container\u0026#34; id=\u0026#34;combine-input-container\u0026#34;\u0026gt; 
\u0026lt;div class=\u0026#34;c-input-container c1-container\u0026#34;\u0026gt; \u0026lt;input type=\u0026#39;text\u0026#39; class=\u0026#34;c-input c1-input\u0026#34; id=\u0026#34;c1\u0026#34;\u0026gt; \u0026lt;label alt=\u0026#39;Departure\u0026#39; class=\u0026#34;c-label c1-label\u0026#34;\u0026gt;\u0026lt;/label\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;div class=\u0026#34;c-input-container c2-container\u0026#34;\u0026gt; \u0026lt;input type=\u0026#39;text\u0026#39; class=\u0026#34;c-input c2-input\u0026#34; id=\u0026#34;c2\u0026#34;\u0026gt; \u0026lt;label alt=\u0026#39;Return\u0026#39; class=\u0026#34;c-label c2-label\u0026#34;\u0026gt;\u0026lt;/label\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt;`; this.initDateRangePicker(); } initDateRangePicker() { // Initialization code for the Date Range Picker } } window.customElements.define(\u0026#39;daterangepicker-two-input\u0026#39;, DaterangepickerDoubleInput); Step 3: Initialize Date Range Picker In the initDateRangePicker method, initialize the date range picker:\ninitDateRangePicker() { $(\u0026#39;#combine-input-container\u0026#39;).daterangepicker({ // Date Range Picker options here }); // Event handlers for apply and cancel actions } Step 4: Style the Custom Element Use CSS to style your custom element:\n.combine-input-container { /* Your styles here */ } .c-input-container { /* Styles for input containers */ } .c-label { /* Styles for labels */ } .c-input { /* Styles for input fields */ } Step 5: Add Custom Element to HTML Use your custom element in the HTML body:\n\u0026lt;body\u0026gt; \u0026lt;daterangepicker-two-input\u0026gt;\u0026lt;/daterangepicker-two-input\u0026gt; \u0026lt;/body\u0026gt; Step 6: Test and Debug Test your custom element in various browsers to ensure compatibility and fix any bugs that arise.\nConclusion That’s the skeleton: one custom element, the plugin bound to the inner container, and CSS however your product needs it.\nFurther Enhancements Expose picker options as 
attributes or properties on the element. Tighten validation and locale-specific formats (Moment still handled most of that in this stack). Match whatever design system the rest of the app used in 2019–2021. If you’re maintaining something built this way, the moving parts are still the same: jQuery for DOM/plugin glue, Moment for parsing (the plugin depended on it for years), and the range picker for the actual UI.","date":"2016-02-05","date_unix":1454680800,"id":"https://antoineboucher.info/CV/blog/posts/tutorial-date-range-picker-component/","permalink":"https://antoineboucher.info/CV/blog/posts/tutorial-date-range-picker-component/","post_kind":"tutorial","section":"posts","summary":"Custom `` element with jQuery, Moment.js, and the Date Range Picker plugin — the kind of range UI that dominated Bootstrap-era admin screens around 2016.","tag_refs":[{"name":"JavaScript","permalink":"https://antoineboucher.info/CV/blog/tags/javascript/"},{"name":"JQuery","permalink":"https://antoineboucher.info/CV/blog/tags/jquery/"},{"name":"Date Picker","permalink":"https://antoineboucher.info/CV/blog/tags/date-picker/"},{"name":"Tutorial","permalink":"https://antoineboucher.info/CV/blog/tags/tutorial/"},{"name":"Frontend","permalink":"https://antoineboucher.info/CV/blog/tags/frontend/"}],"tags":["JavaScript","jQuery","Date Picker","Tutorial","Frontend"],"tags_text":"JavaScript jQuery Date Picker Tutorial Frontend","thumb":"/CV/blog/images/post-kind-tutorial.png","title":"Date range picker web component (jQuery + plugin)"},{"content":"This is a straight DOM-and-setInterval countdown: minutes in, tick down, start and reset, and a short audio clip when it hits zero — the sort of thing that showed up in every “learn JavaScript” blog around 2016 before frameworks swallowed the front page.\nTutorial: Building a Custom Countdown Timer Step 1: Setting Up the HTML Structure First, we\u0026rsquo;ll create the basic structure of the timer. 
This includes input fields for minutes, a display for the countdown, and buttons to start and reset the timer.\n\u0026lt;div class=\u0026#34;Time-option\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;input-group\u0026#34;\u0026gt; \u0026lt;input id=\u0026#34;input\u0026#34; autocomplete=\u0026#34;off\u0026#34; type=\u0026#34;text\u0026#34;/\u0026gt; \u0026lt;label\u0026gt;minutes\u0026lt;/label\u0026gt; \u0026lt;button onclick=\u0026#34;Reset()\u0026#34; class=\u0026#34;btn btn-lg button-refresh\u0026#34;\u0026gt; \u0026lt;span id=\u0026#34;refresh\u0026#34; class=\u0026#34;glyphicon refresh-animate glyphicon-refresh glyphicon-refresh-animate\u0026#34;/\u0026gt; \u0026lt;/button\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;div class=\u0026#34;Time\u0026#34;\u0026gt; \u0026lt;span id=\u0026#34;minutes\u0026#34;\u0026gt;00\u0026lt;/span\u0026gt; \u0026lt;span class=\u0026#34;min\u0026#34;\u0026gt;min\u0026lt;/span\u0026gt; \u0026lt;span id=\u0026#34;seconds\u0026#34;\u0026gt;00\u0026lt;/span\u0026gt; \u0026lt;span class=\u0026#34;sec\u0026#34;\u0026gt;sec\u0026lt;/span\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;audio\u0026gt;\u0026lt;/audio\u0026gt; Step 2: Adding CSS for Styling Next, we\u0026rsquo;ll add CSS to style our timer. This will make the timer more user-friendly and visually appealing.\n/* Add your CSS styling here */ /* Example: */ .Time { font-size: 2em; font-weight: 300; } /* Add styles for inputs, labels, and buttons */ Step 3: JavaScript Functionality Now, we\u0026rsquo;ll add JavaScript to make the timer functional. This includes the countdown logic and the reset functionality.\n$(function() { // Add your jQuery and JavaScript code here // Example: $(\u0026#39;#input\u0026#39;).keypress(function(e) { if (e.which == 13) { // Enter key pressed CheckTick(); } }); }); // Add functions for countdown, CheckTick, and Reset Step 4: Testing and Debugging Test the timer by entering a value and seeing if it counts down correctly. 
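The walkthrough leaves the countdown core as a comment (functions for countdown, CheckTick, and Reset). A DOM-free sketch of that core, with names and shape of my own choosing rather than the original post's code:

```javascript
// Countdown core kept separate from the DOM so it can be tested directly.
// tick() returns the remaining time as zero-padded strings, or null once
// the timer has hit zero (the moment to play the audio clip).
function makeCountdown(minutes) {
  let remaining = Math.max(0, Math.floor(minutes * 60)); // total seconds
  return {
    tick() {
      if (remaining <= 0) return null;
      remaining -= 1;
      return {
        minutes: String(Math.floor(remaining / 60)).padStart(2, "0"),
        seconds: String(remaining % 60).padStart(2, "0"),
      };
    },
    reset(newMinutes) {
      remaining = Math.max(0, Math.floor(newMinutes * 60));
    },
  };
}
```

In the page itself this would run inside setInterval(fn, 1000), write the two values into the minutes and seconds spans, and trigger the audio element when tick() returns null.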
Ensure the audio plays when the timer reaches zero. Test the reset functionality to see if it stops and resets the timer as expected. Step 5: Additional Features and Improvements Add error handling for non-numeric inputs. Implement a visual indicator for when the timer is running. Style the timer to be responsive for better mobile device compatibility. Step 6: Deployment If you\u0026rsquo;re using this timer on a website, embed the HTML, CSS, and JavaScript into the appropriate sections of your webpage. Test the timer in different browsers to ensure cross-browser compatibility. Conclusion You end up with a working countdown, start/reset, and optional alarm. The pattern is old but transparent: easy to adapt, and still fine for static sites or embedded widgets without a build step.","date":"2016-02-01","date_unix":1454335200,"id":"https://antoineboucher.info/CV/blog/posts/tutorial-custom-countdown-timer/","permalink":"https://antoineboucher.info/CV/blog/posts/tutorial-custom-countdown-timer/","post_kind":"tutorial","section":"posts","summary":"Minutes input, start/reset, on-screen display, and optional sound at zero — a small vanilla HTML/CSS/JS timer in the style of mid-2010s tutorials.","tag_refs":[{"name":"JavaScript","permalink":"https://antoineboucher.info/CV/blog/tags/javascript/"},{"name":"HTML","permalink":"https://antoineboucher.info/CV/blog/tags/html/"},{"name":"CSS","permalink":"https://antoineboucher.info/CV/blog/tags/css/"},{"name":"Timer","permalink":"https://antoineboucher.info/CV/blog/tags/timer/"},{"name":"Tutorial","permalink":"https://antoineboucher.info/CV/blog/tags/tutorial/"},{"name":"Frontend","permalink":"https://antoineboucher.info/CV/blog/tags/frontend/"}],"tags":["JavaScript","HTML","CSS","Timer","Tutorial","Frontend"],"tags_text":"JavaScript HTML CSS Timer Tutorial Frontend","thumb":"/CV/blog/images/post-kind-tutorial.png","title":"Custom countdown timer (HTML, CSS, JavaScript)"},{"content":"CodePen demo Introduction This walkthrough 
builds a 3D wave animation with Three.js: scene, camera, WebGL renderer, a grid of cubes, and a simple undulating motion — the sort of “look, WebGL in the browser” demo that fit right in with 2016 Three.js + CodePen articles. Use the embed above to tweak it live.\nSetting Up the Scene First, let\u0026rsquo;s set up the basic components of any Three.js scene: the scene itself, a camera, and a WebGL renderer. Add the following code to initialize these components:\nlet cubes = []; let noiseOffset = 0; const size = 20; const step = 2; // Initialize Three.js Scene, Camera, Renderer const scene = new THREE.Scene(); const camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 0.1, 1000 ); const renderer = new THREE.WebGLRenderer(); renderer.setSize(window.innerWidth, window.innerHeight); document.getElementById(\u0026#34;container\u0026#34;).appendChild(renderer.domElement); // Position the Camera camera.position.z = 50; camera.position.y = 20; camera.lookAt(0, 0, 0); This code creates a 3D scene, adds a perspective camera, and sets up the WebGL renderer to display our graphics.\nAdding Cubes with Perlin Noise Now, let\u0026rsquo;s add cubes to our scene. We\u0026rsquo;ll use Perlin noise to vary the Y-position of each cube, creating a wave-like effect. 
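One caveat before the cube code: the noise helper in this demo is a trigonometric stand-in, not real Perlin noise, so its one dependable property is boundedness. A standalone check of that bound (my sketch, mirroring the function the pen defines):

```javascript
// The demo's stand-in: each factor is bounded by 1, so the product is in
// [-1, 1] and the displaced y = noise(...) * 10 stays within [-10, 10].
function noise(x, y, z) {
  return Math.sin(x) * Math.cos(y) * Math.sin(z);
}

// Sample the same grid the demo builds (x, z in [-20, 20], step 2) and
// record the largest displacement actually produced.
let maxAbsY = 0;
for (let x = -20; x <= 20; x += 2) {
  for (let z = -20; z <= 20; z += 2) {
    maxAbsY = Math.max(maxAbsY, Math.abs(noise(x * 0.1, 0, z * 0.1) * 10));
  }
}
```

Real Perlin noise (a small noise library, for example) would give smoother, less repetitive hills, but the bound is the only property the animation actually relies on.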
Here\u0026rsquo;s how you can do it:\n// Function to simulate Perlin noise function noise(x, y, z) { return Math.sin(x) * Math.cos(y) * Math.sin(z); } // Create cubes using Perlin noise for (let x = -size; x \u0026lt;= size; x += step) { for (let z = -size; z \u0026lt;= size; z += step) { const y = noise(x * 0.1, noiseOffset, z * 0.1) * 10; const geometry = new THREE.BoxGeometry(); const material = new THREE.MeshBasicMaterial({ color: 0x00ff00 }); const cube = new THREE.Mesh(geometry, material); cube.position.set(x, y, z); scene.add(cube); cubes.push(cube); // Store the cube } noiseOffset += 0.1; } In this snippet, we create multiple cubes and position them using the sine/cosine stand-in for Perlin noise, giving us the initial setup for our wave animation.\nAnimating the Cubes To animate our cubes, we\u0026rsquo;ll update their Y-position in an animation loop. This continuous update creates the illusion of a moving wave. Here\u0026rsquo;s the code for our animation loop:\n// Animation Loop with Wave Animation let waveOffset = 0; function animate() { requestAnimationFrame(animate); updateCameraPosition(); // Update each cube\u0026#39;s Y position for the wave effect cubes.forEach((cube, i) =\u0026gt; { cube.position.y = noise(cube.position.x * 0.1, waveOffset, cube.position.z * 0.1) * 10; }); waveOffset += 0.01; // Change this value to control the speed of the wave renderer.render(scene, camera); } animate(); The animate function is called repeatedly, updating the position of each cube to simulate a wave.\nAdding Interaction To make our scene interactive, we\u0026rsquo;ll add event listeners for mouse and keyboard inputs.
This allows users to control the camera and explore the 3D space.\n// Mouse Controls let isDragging = false; let prevX = 0; let prevY = 0; document.addEventListener(\u0026#34;mousedown\u0026#34;, function (e) { isDragging = true; prevX = e.clientX; prevY = e.clientY; }); document.addEventListener(\u0026#34;mouseup\u0026#34;, function () { isDragging = false; }); document.addEventListener(\u0026#34;mousemove\u0026#34;, function (e) { if (isDragging) { const dx = e.clientX - prevX; const dy = e.clientY - prevY; camera.rotation.y += dx * 0.01; camera.rotation.x += dy * 0.01; prevX = e.clientX; prevY = e.clientY; } }); // Gamepad Controls function gamepadControl() { const gamepads = navigator.getGamepads(); if (gamepads[0]) { const gp = gamepads[0]; camera.position.z -= gp.buttons[0].value * 0.1; camera.position.z += gp.buttons[1].value * 0.1; camera.position.x -= gp.buttons[2].value * 0.1; camera.position.x += gp.buttons[3].value * 0.1; } requestAnimationFrame(gamepadControl); } gamepadControl(); let keyStates = {}; // Keyboard event listeners document.addEventListener(\u0026#34;keydown\u0026#34;, function (event) { keyStates[event.code] = true; }); document.addEventListener(\u0026#34;keyup\u0026#34;, function (event) { keyStates[event.code] = false; }); // Update camera position based on keyboard input function updateCameraPosition() { if (keyStates[\u0026#34;ArrowUp\u0026#34;]) camera.position.z -= 0.1; if (keyStates[\u0026#34;ArrowDown\u0026#34;]) camera.position.z += 0.1; if (keyStates[\u0026#34;ArrowLeft\u0026#34;]) camera.position.x -= 0.1; if (keyStates[\u0026#34;ArrowRight\u0026#34;]) camera.position.x += 0.1; if (keyStates[\u0026#34;KeyW\u0026#34;]) camera.rotation.x -= 0.01; if (keyStates[\u0026#34;KeyS\u0026#34;]) camera.rotation.x += 0.01; if (keyStates[\u0026#34;KeyA\u0026#34;]) camera.rotation.y += 0.01; if (keyStates[\u0026#34;KeyD\u0026#34;]) camera.rotation.y -= 0.01; } // Mouse Scroll Control document.addEventListener(\u0026#34;wheel\u0026#34;, 
function (e) { camera.position.z += e.deltaY * 0.01; }); These event listeners enable users to rotate the camera and zoom in and out, enhancing the interactive experience.\nYou now have a basic animated wave in Three.js. Fork the pen and push the motion, materials, or camera — same pipeline people have been iterating on since these kinds of tutorials were the default intro to WebGL in the browser.","date":"2016-01-07","date_unix":1452182400,"id":"https://antoineboucher.info/CV/blog/posts/codepen-threejs-wave-tutorial/","permalink":"https://antoineboucher.info/CV/blog/posts/codepen-threejs-wave-tutorial/","post_kind":"tutorial","section":"posts","summary":"Grid of cubes with a simple wave motion in Three.js — step-by-step CodePen tutorial, typical of the WebGL curiosity posts from ~2016.","tag_refs":[{"name":"Three.js","permalink":"https://antoineboucher.info/CV/blog/tags/three.js/"},{"name":"WebGL","permalink":"https://antoineboucher.info/CV/blog/tags/webgl/"},{"name":"CodePen","permalink":"https://antoineboucher.info/CV/blog/tags/codepen/"},{"name":"JavaScript","permalink":"https://antoineboucher.info/CV/blog/tags/javascript/"},{"name":"Tutorial","permalink":"https://antoineboucher.info/CV/blog/tags/tutorial/"}],"tags":["Three.js","WebGL","CodePen","JavaScript","Tutorial"],"tags_text":"Three.js WebGL CodePen JavaScript Tutorial","thumb":"/CV/blog/images/post-kind-tutorial.png","title":"Three.js 3D wave animation (CodePen tutorial)"},{"content":"CodePen demo Introduction This tutorial walks through a simple interactive blackboard with HTML5 Canvas and plain JavaScript: drawing, pulling in images via the File API, and clearing the board. It matches the kind of step-by-step CodePen write-up that was everywhere around 2016. Use the embed above to see the finished behavior.\nSetting Up the Canvas First, we need to set up the HTML5 canvas element.
This is where all the drawing will take place.\n\u0026lt;canvas id=\u0026#34;drawingCanvas\u0026#34;\u0026gt;\u0026lt;/canvas\u0026gt; In your CSS, make sure the canvas takes the full screen and has a dark background to mimic a blackboard:\nhtml, body { width: 100%; height: 100%; overflow: hidden; margin: 0; padding: 0; background: hsla(0, 5%, 5%, 1); } canvas { background: hsla(0, 5%, 5%, 1); } Adding Controls We\u0026rsquo;ll add some basic controls for color selection, pen size, saving the canvas as an image, and an option to erase the canvas.\n\u0026lt;input type=\u0026#34;color\u0026#34; id=\u0026#34;colorPicker\u0026#34; value=\u0026#34;#FFFFFF\u0026#34;\u0026gt; \u0026lt;input type=\u0026#34;range\u0026#34; id=\u0026#34;penSize\u0026#34; min=\u0026#34;1\u0026#34; max=\u0026#34;20\u0026#34; value=\u0026#34;5\u0026#34;\u0026gt; \u0026lt;button id=\u0026#34;saveImage\u0026#34;\u0026gt;Save Image\u0026lt;/button\u0026gt; \u0026lt;button id=\u0026#34;eraseCanvas\u0026#34;\u0026gt;Erase\u0026lt;/button\u0026gt; \u0026lt;input type=\u0026#34;file\u0026#34; id=\u0026#34;imageLoader\u0026#34; name=\u0026#34;imageLoader\u0026#34; accept=\u0026#34;image/*\u0026#34;\u0026gt; Style these controls so they are easily accessible:\n#colorPicker, #penSize, #saveImage, #eraseCanvas, #imageLoader { position: absolute; top: 10px; z-index: 1000; } #colorPicker { right: 40px; } #penSize { right: 120px; } #eraseCanvas { right: 275px; } #saveImage { right: 350px; } #imageLoader { right: 400px; } Implementing the Drawing Logic Now, let\u0026rsquo;s write the JavaScript to handle drawing on the canvas. 
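The drag behavior in the handlers below comes down to a point-in-rectangle test. Pulled out as a pure function for clarity (my refactor; the pen inlines the same comparisons inside onMouseDown):

```javascript
// True when the point (px, py) falls inside an image object's bounding box;
// the same comparisons the tutorial's mousedown handler performs inline.
function hitTest(imgObj, px, py) {
  return (
    px >= imgObj.x &&
    px <= imgObj.x + imgObj.width &&
    py >= imgObj.y &&
    py <= imgObj.y + imgObj.height
  );
}
```

Note the demo picks any image whose box contains the point; with overlapping images you would usually iterate in reverse draw order so the topmost one wins.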
We\u0026rsquo;ll set up event listeners to handle mouse movements and draw on the canvas.\nlet canvas, ctx; let isDrawing = false, isDragging = false; let curColor = \u0026#39;#FFFFFF\u0026#39;; let lineWidth = 5; let imageObjects = [], drawingObjects = []; let currentDraggingImg = null; window.onload = function() { canvas = document.getElementById(\u0026#39;drawingCanvas\u0026#39;); canvas.width = window.innerWidth; canvas.height = window.innerHeight; ctx = canvas.getContext(\u0026#34;2d\u0026#34;); ctx.lineWidth = lineWidth; canvas.addEventListener(\u0026#39;mousedown\u0026#39;, onMouseDown); canvas.addEventListener(\u0026#39;mousemove\u0026#39;, onMouseMove); canvas.addEventListener(\u0026#39;mouseup\u0026#39;, onMouseUp); document.getElementById(\u0026#39;colorPicker\u0026#39;).addEventListener(\u0026#39;input\u0026#39;, function(e) { curColor = e.target.value; }); document.getElementById(\u0026#39;penSize\u0026#39;).addEventListener(\u0026#39;input\u0026#39;, function(e) { lineWidth = e.target.value; }); document.getElementById(\u0026#39;saveImage\u0026#39;).addEventListener(\u0026#39;click\u0026#39;, saveImage); document.getElementById(\u0026#39;imageLoader\u0026#39;).addEventListener(\u0026#39;change\u0026#39;, loadImage); }; function onMouseDown(e) { const mouseX = e.pageX - canvas.offsetLeft; const mouseY = e.pageY - canvas.offsetTop; currentDraggingImg = null; // Check if an image is being clicked imageObjects.forEach(imgObj =\u0026gt; { if (mouseX \u0026gt;= imgObj.x \u0026amp;\u0026amp; mouseX \u0026lt;= imgObj.x + imgObj.width \u0026amp;\u0026amp; mouseY \u0026gt;= imgObj.y \u0026amp;\u0026amp; mouseY \u0026lt;= imgObj.y + imgObj.height) { imgObj.isDragging = true; currentDraggingImg = imgObj; isDragging = true; } }); if (!currentDraggingImg) { isDrawing = true; const path = { color: curColor, lineWidth: lineWidth, points: [{x: mouseX, y: mouseY}] }; drawingObjects.push(path); } } function onMouseMove(e) { const mouseX = e.pageX - canvas.offsetLeft; 
const mouseY = e.pageY - canvas.offsetTop; if (isDragging \u0026amp;\u0026amp; currentDraggingImg) { currentDraggingImg.x = mouseX; currentDraggingImg.y = mouseY; redrawCanvas(); } else if (isDrawing) { const currentPath = drawingObjects[drawingObjects.length - 1]; currentPath.points.push({x: mouseX, y: mouseY}); redrawCanvas(); } } function onMouseUp() { if (isDragging \u0026amp;\u0026amp; currentDraggingImg) { currentDraggingImg.isDragging = false; } isDrawing = isDragging = false; } This code initializes the canvas, sets up the 2D context, and handles the mouse events that either drag an image or draw a new line on the canvas.\nAdding Image Loading and Erasing Features Next, we add the functionality to load images onto the canvas and erase the contents of the canvas.\nfunction loadImage(e) { var reader = new FileReader(); reader.onload = function(event) { var img = new Image(); img.onload = function() { imageObjects.push({ img: img, x: 0, y: 0, width: img.width, height: img.height, isDragging: false }); redrawCanvas(); }; img.src = event.target.result; }; reader.readAsDataURL(e.target.files[0]); }; function redrawCanvas() { ctx.clearRect(0, 0, canvas.width, canvas.height); // Draw all image objects imageObjects.forEach(imgObj =\u0026gt; { ctx.drawImage(imgObj.img, imgObj.x, imgObj.y); }); // Draw all drawing paths drawingObjects.forEach(path =\u0026gt; { ctx.beginPath(); ctx.strokeStyle = path.color; ctx.lineWidth = path.lineWidth; path.points.forEach((point, index) =\u0026gt; { if (index === 0) { ctx.moveTo(point.x, point.y); } else { ctx.lineTo(point.x, point.y); } }); ctx.stroke(); }); } function saveImage() { var image = canvas.toDataURL(\u0026#34;image/png\u0026#34;).replace(\u0026#34;image/png\u0026#34;, \u0026#34;image/octet-stream\u0026#34;); var link = document.createElement(\u0026#39;a\u0026#39;); link.download = \u0026#39;canvas-drawing.png\u0026#39;; link.href = image; link.click(); }
document.getElementById(\u0026#39;eraseCanvas\u0026#39;).addEventListener(\u0026#39;click\u0026#39;, eraseCanvas); function eraseCanvas() { ctx.clearRect(0, 0, canvas.width, canvas.height); imageObjects = []; drawingObjects = []; } loadImage uses the FileReader API to read the chosen file and draw it onto the canvas, while eraseCanvas clears the pixels along with the stored image and path objects.\nYou now have a working blackboard on Canvas. Fork it on CodePen and extend it — extra brushes, undo, or pressure sensitivity are natural next steps; the APIs are the same ones we have been using since this style of tutorial was current.","date":"2016-01-07","date_unix":1452178800,"id":"https://antoineboucher.info/CV/blog/posts/codepen-blackboard-canvas-tutorial/","permalink":"https://antoineboucher.info/CV/blog/posts/codepen-blackboard-canvas-tutorial/","post_kind":"tutorial","section":"posts","summary":"Interactive canvas blackboard with drawing, image import, and erase — CodePen demo, in the spirit of 2016-era Canvas tutorials.","tag_refs":[{"name":"Canvas","permalink":"https://antoineboucher.info/CV/blog/tags/canvas/"},{"name":"JavaScript","permalink":"https://antoineboucher.info/CV/blog/tags/javascript/"},{"name":"CodePen","permalink":"https://antoineboucher.info/CV/blog/tags/codepen/"},{"name":"Tutorial","permalink":"https://antoineboucher.info/CV/blog/tags/tutorial/"},{"name":"Frontend","permalink":"https://antoineboucher.info/CV/blog/tags/frontend/"}],"tags":["Canvas","JavaScript","CodePen","Tutorial","Frontend"],"tags_text":"Canvas JavaScript CodePen Tutorial Frontend","thumb":"/CV/blog/images/post-kind-tutorial.png","title":"HTML5 Canvas blackboard (CodePen tutorial)"},{"content":"Tutorial: Creating Draggable and Sortable Images with jQuery Introduction This
walkthrough uses jQuery and jQuery UI to build circular “bubble” avatars you can drag and reorder — the same ingredients that showed up in countless demos when Messenger-style circles and sortable lists were everywhere (roughly the ES5 + jQuery era, ~2016). Handy for legacy pages or if you want the recipe in one place.\nPrerequisites Basic knowledge of HTML, CSS, and JavaScript jQuery and jQuery UI library HTML Structure We start by setting up our HTML structure with two unordered lists (ul) and list items (li) containing images.\n\u0026lt;div id=\u0026#34;draw\u0026#34;\u0026gt; \u0026lt;ul id=\u0026#34;ul1\u0026#34;\u0026gt; \u0026lt;li class=\u0026#34;li1\u0026#34;\u0026gt;\u0026lt;img id=\u0026#34;Logo\u0026#34; src=\u0026#34;your-image-source-1.jpg\u0026#34;\u0026gt;\u0026lt;/li\u0026gt; \u0026lt;li class=\u0026#34;li1\u0026#34;\u0026gt;\u0026lt;img id=\u0026#34;Logo2\u0026#34; src=\u0026#34;your-image-source-2.jpg\u0026#34;\u0026gt;\u0026lt;/li\u0026gt; \u0026lt;/ul\u0026gt; \u0026lt;ul id=\u0026#34;ul2\u0026#34;\u0026gt; \u0026lt;li class=\u0026#34;li2\u0026#34;\u0026gt;\u0026lt;img id=\u0026#34;Logo3\u0026#34; src=\u0026#34;your-image-source-3.jpg\u0026#34;\u0026gt;\u0026lt;/li\u0026gt; \u0026lt;li class=\u0026#34;li2\u0026#34;\u0026gt;\u0026lt;img id=\u0026#34;Logo4\u0026#34; src=\u0026#34;your-image-source-4.jpg\u0026#34;\u0026gt;\u0026lt;/li\u0026gt; \u0026lt;/ul\u0026gt; \u0026lt;/div\u0026gt; CSS Styling Next, we style the unordered lists and images. 
We remove the default list styling and set some basic styles for the images.\nul, li { list-style: none; } img { border-radius: 50%; border: 0.5px solid #888; width: 60px; height: 60px; margin: 0px; } jQuery Function We create a jQuery function to apply the rounded shape and make images either draggable or default based on the option passed.\n(function($) { $.fn.roundShape = function(option) { if (option === \u0026#34;default\u0026#34;) { this.css({ \u0026#34;border-radius\u0026#34;: \u0026#34;50%\u0026#34;, \u0026#34;border\u0026#34;: \u0026#34;0.5px solid #888\u0026#34;, \u0026#34;width\u0026#34;: \u0026#34;60px\u0026#34;, \u0026#34;height\u0026#34;: \u0026#34;60px\u0026#34;, \u0026#34;margin\u0026#34;: \u0026#34;0px\u0026#34; }); }; if (option === \u0026#34;draggable\u0026#34;) { this.css({ \u0026#34;border-radius\u0026#34;: \u0026#34;50%\u0026#34;, \u0026#34;border\u0026#34;: \u0026#34;0.5px solid #888\u0026#34;, \u0026#34;width\u0026#34;: \u0026#34;60px\u0026#34;, \u0026#34;height\u0026#34;: \u0026#34;60px\u0026#34;, \u0026#34;margin\u0026#34;: \u0026#34;0px\u0026#34; }).draggable({ scroll: true, scrollSensitivity: 100 }); } }; }(jQuery)); Applying the Function Finally, we apply the jQuery function to our images and enable sorting and dragging functionalities.\n$(document).ready(function() { $(\u0026#39;#ul1\u0026#39;).sortable({ revert: true }); $(\u0026#39;#Logo, #Logo2\u0026#39;).roundShape(\u0026#34;default\u0026#34;); $(\u0026#39;#Logo3, #Logo4\u0026#39;).roundShape(\u0026#34;draggable\u0026#34;); $(\u0026#39;#draw\u0026#39;).draggable({ axis: \u0026#34;x\u0026#34; }); $(\u0026#34;#ul1, .li1, .li2\u0026#34;).disableSelection(); }); Following these steps gives you draggable, sortable circular images on a page — jQuery UI all the way, but it still behaves the same in any browser where you include the 
libraries.","date":"2016-01-07","date_unix":1452175200,"id":"https://antoineboucher.info/CV/blog/posts/tutorial-jquery-messenger-image-bubbles/","permalink":"https://antoineboucher.info/CV/blog/posts/tutorial-jquery-messenger-image-bubbles/","post_kind":"tutorial","section":"posts","summary":"Messenger-style circular image tiles with jQuery UI — drag-and-drop and sortable lists, straight out of the jQuery-heavy tutorial blogs of ~2016.","tag_refs":[{"name":"JQuery","permalink":"https://antoineboucher.info/CV/blog/tags/jquery/"},{"name":"JQuery UI","permalink":"https://antoineboucher.info/CV/blog/tags/jquery-ui/"},{"name":"CSS","permalink":"https://antoineboucher.info/CV/blog/tags/css/"},{"name":"Tutorial","permalink":"https://antoineboucher.info/CV/blog/tags/tutorial/"},{"name":"Frontend","permalink":"https://antoineboucher.info/CV/blog/tags/frontend/"}],"tags":["jQuery","jQuery UI","CSS","Tutorial","Frontend"],"tags_text":"jQuery jQuery UI CSS Tutorial Frontend","thumb":"/CV/blog/images/post-kind-tutorial.png","title":"Draggable, sortable image bubbles (Messenger-style)"},{"content":"Dimension Repository · mathlib on docs.rs\nmathlib is a Rust crate for dense and sparse linear algebra, decompositions, 3D math, clustering, graph algorithms, transforms, and more—with WebAssembly demos and optional SIMD / GPU features. 
The Dimension repo wraps that crate alongside kinematics, physics, rendering experiments, and documentation.\nQuick start cd mathlib \u0026amp;\u0026amp; cargo build cd mathlib \u0026amp;\u0026amp; cargo test See the root README and docs/DOCS.md for architecture and examples.\nBlog: Dimension — a Rust math stack around mathlib","date":"2026-04-13","date_unix":1776081600,"id":"https://antoineboucher.info/CV/blog/projects/dimension/","permalink":"https://antoineboucher.info/CV/blog/projects/dimension/","post_kind":"","section":"projects","summary":"Rust monorepo centered on mathlib — linear algebra, sparse matrices, WASM, optional GPU.","tag_refs":[],"tags":[],"tags_text":"","thumb":"/CV/blog/images/post-kind-project.png","title":"Dimension (mathlib)"},{"content":"ESP32-7SEG Repository · MIT License\nFirmware for an ESP32 FireBeetle that drives an Adafruit 7-segment display (I2C backpack) and exposes a local web interface for timer modes and WiFi management.\nWeb interface The built-in server serves a Timer Control panel in the browser:\nStopwatch and Countdown modes Quick presets: 10 Min, 5 Min, 1 Min Manual minutes and seconds fields plus Start Reset WiFi to clear stored credentials when you need to re-provision the device Hardware ESP32 FireBeetle (v1.0) Adafruit 7-segment display with I2C backpack Breadboard and jumper wires (SCL, SDA, VCC, GND) Software PlatformIO project; see platformio.ini in the repo for board and library configuration. The repository also includes Android and firmware subprojects for a fuller stack around the same device. Quick start git clone https://github.com/antoinebou12/ESP32-7SEG.git cd ESP32-7SEG Open the project in PlatformIO (or Arduino IDE with equivalent libraries), build, and flash the ESP32. 
After boot, check the serial monitor for the device IP address, then open it in a browser on the same network to use the web UI.","date":"2026-04-13","date_unix":1776081600,"id":"https://antoineboucher.info/CV/blog/projects/esp32-7seg/","permalink":"https://antoineboucher.info/CV/blog/projects/esp32-7seg/","post_kind":"","section":"projects","summary":"ESP32 firmware and web UI to drive an Adafruit 7-segment display—stopwatch, countdown, and WiFi setup.","tag_refs":[],"tags":[],"tags_text":"","thumb":"https://antoineboucher.info/CV/blog/projects/esp32-7seg/featured_hu_a79c915d2b23520.png","title":"ESP32-7SEG"},{"content":"HDR-10bpp-Display-Test The HDR-10bpp-Display-Test is a simple yet effective way to verify the HDR 4K display capabilities on Linux systems, specifically testing the color depth of 10 bits per channel. This test is essential for anyone looking to ensure the highest quality display performance on their Linux environment.\nGetting Started HDR-10bpp-Display-Test: A test project for HDR 4K display on Linux.\nPrerequisites Before running the test, ensure your system has the following software installed:\nX server Python 3 GTK 3 ImageJ (for 10-bit color depth image display) ImageIO (for reading image files) On Ubuntu or Debian-based systems, install these packages using the command:\nsudo apt-get install xserver-xorg python3 python3-gi python3-gi-cairo gir1.2-gtk-3.0 imagej Installing ImageIO is a necessary component and can be installed via pip:\npip3 install imageio Running the Test To conduct the test, follow these steps:\nClone the repository:\ngit clone https://github.com/yourusername/HDR-10bpp-Display-Test.git Navigate to the project directory:\ncd HDR-10bpp-Display-Test Stop the display manager and X server:\nsudo systemctl stop lightdm || sudo systemctl stop gdm sudo pkill Xorg Start the X server with a color depth of 30:\nstartx -- -depth 30 Verify the X server\u0026rsquo;s color depth:\nxwininfo -root | grep Depth Launch the viewer 
application:\npython3 Viewer.py python3 Viewer3.py # for video support To display an image, use:\nimagej --no-splash /path/to/image Check if the image is displayed correctly with accurate colors.\nTroubleshooting If the X server fails to start with a color depth of 30, attempt to start it with a depth of 24 instead:\nstartx -- -depth 24 License This project is licensed under the MIT License - see the LICENSE.md file for details.\nWith this setup, the HDR-10bpp-Display-Test project aims to streamline the process of verifying and ensuring optimal display settings on Linux systems, particularly for those requiring high-fidelity visual outputs.","date":"2024-01-01","date_unix":1704110400,"id":"https://antoineboucher.info/CV/blog/projects/hdr-10bpp-display-test/","permalink":"https://antoineboucher.info/CV/blog/projects/hdr-10bpp-display-test/","post_kind":"","section":"projects","summary":"A test to verify HDR 4K display on Linux with 10-bit color depth.","tag_refs":[],"tags":[],"tags_text":"","thumb":"/CV/blog/images/post-kind-project.png","title":"HDR-10bpp-Display-Test"},{"content":"\nRepository · MIT License\nCompose-first homelab media stack: fetch content (torrents and Usenet), route it through *Arr apps, add subtitles where needed, then serve libraries with Plex or Jellyfin, with optional request and monitoring UIs. Configuration is meant to live under a shared ROOT tree on disk as in the repo’s docker-compose.yml.\nWhat’s in the stack Grouped roughly by role (see the compose file for images and volumes):\nDownload clients \u0026amp; indexers — Deluge, NZBGet, Jackett, NZBHydra2, Prowlarr Automation — Sonarr, Radarr, Lidarr, Bazarr, CouchPotato; Readarr (books); Whisparr; Tdarr (transcoding) Libraries \u0026amp; requests — Plex, Jellyfin, Ombi, Jellyseerr, Tautulli Extras — Stash (specialized library organizer) Ops — Netdata, Dashmachine, Filebrowser Quick start Install Docker and Compose on the host. 
Clone the repo and configure environment variables (see the repo .env and paths like ROOT). From the project directory: git clone https://github.com/antoinebou12/MediaBoxDockerCompose.git cd MediaBoxDockerCompose docker compose up -d If your setup still uses the older CLI, docker-compose up -d matches what the README describes.\nDocumentation Default ports, credentials, backup/restore notes, and contributing are maintained in the README so this page stays a short overview rather than a second copy of the manual.","date":"2024-01-01","date_unix":1704110400,"id":"https://antoineboucher.info/CV/blog/projects/mediaboxdockercompose/","permalink":"https://antoineboucher.info/CV/blog/projects/mediaboxdockercompose/","post_kind":"","section":"projects","summary":"Docker Compose stack for downloads, *Arr automation, Plex/Jellyfin, and ops dashboards—config lives in the repo.","tag_refs":[{"name":"Docker","permalink":"https://antoineboucher.info/CV/blog/tags/docker/"},{"name":"Docker Compose","permalink":"https://antoineboucher.info/CV/blog/tags/docker-compose/"},{"name":"Homelab","permalink":"https://antoineboucher.info/CV/blog/tags/homelab/"},{"name":"Plex","permalink":"https://antoineboucher.info/CV/blog/tags/plex/"},{"name":"Sonarr","permalink":"https://antoineboucher.info/CV/blog/tags/sonarr/"},{"name":"Radarr","permalink":"https://antoineboucher.info/CV/blog/tags/radarr/"}],"tags":["Docker","Docker Compose","Homelab","Plex","Sonarr","Radarr"],"tags_text":"Docker Docker Compose Homelab Plex Sonarr Radarr","thumb":"/CV/blog/images/post-kind-project.png","title":"Media box (Docker Compose)"},{"content":"RetroArch Web Games retroarch-web-games: Docker Retroarch Web with pre-downloaded games.\nThis repository offers a self-hosted RetroArch web player, allowing you to enjoy classic NES, SNES, Genesis, and Gameboy games right in your browser. Set up is a breeze with our Docker container.\nFeatures Pre-loaded Games: Enjoy a variety of games for NES, SNES, Genesis, and Gameboy. 
Self-hosted Web Player: Easily host the RetroArch player on your own server. Easy Deployment: Utilize Docker for straightforward setup and deployment. How to Use To get started, run the Docker image with the following command:\ndocker-compose up -d The image is available at https://hub.docker.com/r/antoinebou13/retroarch-web-games\nImage Size Warning Note: The Docker image is approximately 10GB due to the inclusion of various games.\n### Behind the Scenes: Scripting for Game Downloads The RetroArch Web Games setup includes scripting to streamline the download and organization of game files. Here\u0026#39;s a breakdown of how these scripts function: 1. **Batch Downloading of Game Archives:** The `download_7z_files` bash function and a companion Python script work together to download game archives from various sources. 2. **Simplifying and Sorting Games:** Using both bash and Python, the scripts simplify game filenames and sort them into appropriate directories. This ensures easy navigation and selection within the RetroArch interface. 3. **Efficiency and Error Handling:** Parallel processing in bash and Python\u0026#39;s ThreadPoolExecutor maximizes throughput, while error handling keeps the batch stable and reports issues during the download process. ## Acknowledgements This Docker image is based on: - [Inglebard/dockerfiles (retroarch-web branch)](https://github.com/Inglebard/dockerfiles/tree/retroarch-web) - [libretro/RetroArch (master/pkg/emscripten)](https://github.com/libretro/RetroArch/tree/master/pkg/emscripten) ### License Licensed under the MIT License, ensuring open and unrestricted use by the community. I have put a lot of work into making this project both powerful and accessible, whether you\u0026#39;re a seasoned gamer or just nostalgic for the classics. 
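The batch-download pattern the scripts use (parallel fetches via ThreadPoolExecutor, with per-URL error handling so one failure does not abort the batch) can be sketched roughly like this. The function name and destination folder are illustrative, not the repository's actual script:

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path
from urllib.request import urlretrieve


def download_all(urls, dest="roms", workers=4):
    """Fetch archives in parallel; failures are collected per URL instead of aborting."""
    out = Path(dest)
    out.mkdir(parents=True, exist_ok=True)
    errors = {}

    def fetch(url):
        # Name the local file after the last path segment of the URL.
        target = out / url.rsplit("/", 1)[-1]
        try:
            urlretrieve(url, target)  # blocking download of one archive
        except OSError as exc:  # URLError is a subclass of OSError
            errors[url] = exc

    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() is lazy; wrapping in list() waits for every download to finish.
        list(pool.map(fetch, urls))
    return errors
```

Filename simplification and per-console sorting would then run over `dest` as a second pass, as described above.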
### Source \u0026#34;https://archive.org/download/nointro.gb\u0026#34; \u0026#34;https://archive.org/download/nointro.gbc\u0026#34; \u0026#34;https://archive.org/download/nointro.gba\u0026#34; \u0026#34;https://archive.org/download/nointro.snes\u0026#34; \u0026#34;https://archive.org/download/nointro.md\u0026#34; \u0026#34;https://archive.org/download/nointro.nes-headered\u0026#34; ","date":"2024-01-01","date_unix":1704110400,"id":"https://antoineboucher.info/CV/blog/projects/retroarch-web-games/","permalink":"https://antoineboucher.info/CV/blog/projects/retroarch-web-games/","post_kind":"","section":"projects","summary":"A self-hosted RetroArch web player with a collection of classic games.","tag_refs":[],"tags":[],"tags_text":"","thumb":"/CV/blog/images/post-kind-project.png","title":"RetroArch Web Games"},{"content":"Portfolio Project This project is about creating a personal portfolio website using Hugo. You can view the project here.","date":"2023-12-30","date_unix":1703937600,"id":"https://antoineboucher.info/CV/blog/projects/porfolio/","permalink":"https://antoineboucher.info/CV/blog/projects/porfolio/","post_kind":"","section":"projects","summary":"A personal portfolio website built with Hugo.","tag_refs":[],"tags":[],"tags_text":"","thumb":"/CV/blog/images/post-kind-project.png","title":"Portfolio"},{"content":"D2COpenAIPlugin You can view the project here You can use the plugin here\nJoin the ChatGPT plugins waitlist here!\nD2COpenAIPlugin is a plugin for ChatGPT that enables users to generate diagrams using PlantUML or Mermaid. 
This plugin enhances the capabilities of ChatGPT by providing a seamless way to create diverse and creative diagrams.\nFeatures Generate diagrams using PlantUML or Mermaid Seamless integration with ChatGPT User-friendly interface for creating diagrams Enhances the versatility of ChatGPT Installation Before using the plugin, make sure to have the following prerequisites installed:\nPython 3.10+ FastAPI uvicorn Install Python 3.10, if not already installed. Clone the repository: git clone https://github.com/antoinebou12/D2COpenAIPlugin.git Navigate to the cloned repository directory: cd /path/to/D2COpenAIPlugin Install poetry: pip install poetry Create a new virtual environment with Python 3.10: poetry env use python3.10 Activate the virtual environment: poetry shell Install app dependencies: poetry install Create a bearer token Set the required environment variables: Setup To install the required packages for this plugin, run the following command:\npip install -r requirements-dev.txt To run the plugin, enter the following command:\npython app.py Once the local server is running:\nuvicorn app:app --host 127.0.0.1 --port 5003 Navigate to https://chat.openai.com. In the Model drop-down, select \u0026ldquo;Plugins\u0026rdquo; (note: if you don\u0026rsquo;t see it there, you don\u0026rsquo;t have access yet). Select \u0026ldquo;Plugin store\u0026rdquo;, then \u0026ldquo;Develop your own plugin\u0026rdquo;. Enter localhost:5003, since this is the URL the server is running on locally, then select \u0026ldquo;Find manifest file\u0026rdquo;. The plugin should now be installed and enabled! You can start with a request like \u0026ldquo;Draw a sequence diagram of a user logging in\u0026rdquo; and refine it from there.\nTesting in ChatGPT To test a locally hosted plugin in ChatGPT, follow these steps:\nRun the API on localhost: poetry run dev Follow the instructions in the Testing a Localhost Plugin in ChatGPT section of the README. 
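For orientation, the \u0026ldquo;Find manifest file\u0026rdquo; step simply fetches a JSON document from a well-known path on your server. A stdlib-only sketch of that endpoint follows; the real project serves it via FastAPI, and the manifest fields shown here are illustrative, not the repository's actual manifest:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical manifest values; the real fields come from the repo's
# .well-known/ai-plugin.json and the ChatGPT plugin spec.
MANIFEST = {
    "schema_version": "v1",
    "name_for_human": "Diagram Generator",
    "name_for_model": "diagram_generator",
    "api": {"type": "openapi", "url": "http://localhost:5003/openapi.json"},
}

class PluginManifestHandler(BaseHTTPRequestHandler):
    """Serves the plugin manifest ChatGPT looks for at a fixed path."""

    def do_GET(self):
        if self.path == "/.well-known/ai-plugin.json":
            body = json.dumps(MANIFEST).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet
```

Running `HTTPServer(("127.0.0.1", 5003), PluginManifestHandler).serve_forever()` would make the manifest discoverable at the address the walkthrough enters.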
For more detailed information on setting up, developing, and deploying the plugin, refer to the full Development section below.\nGetting help If you run into issues or have questions building a plugin, please join our Developer community forum.","date":"2021-09-06","date_unix":1630939343,"id":"https://antoineboucher.info/CV/blog/projects/d2copenaiplugin/","permalink":"https://antoineboucher.info/CV/blog/projects/d2copenaiplugin/","post_kind":"","section":"projects","summary":"A plugin for ChatGPT that enables users to generate diagrams using PlantUML or Mermaid.","tag_refs":[{"name":"ChatGPT","permalink":"https://antoineboucher.info/CV/blog/tags/chatgpt/"},{"name":"OpenAI","permalink":"https://antoineboucher.info/CV/blog/tags/openai/"},{"name":"Plugin","permalink":"https://antoineboucher.info/CV/blog/tags/plugin/"},{"name":"PlantUML","permalink":"https://antoineboucher.info/CV/blog/tags/plantuml/"},{"name":"Mermaid","permalink":"https://antoineboucher.info/CV/blog/tags/mermaid/"}],"tags":["ChatGPT","OpenAI","Plugin","PlantUML","Mermaid"],"tags_text":"ChatGPT OpenAI Plugin PlantUML Mermaid","thumb":"/CV/blog/images/post-kind-project.png","title":"D2COpenAIPlugin"},{"content":"DasherControl Another interactive, configurable dashboard with customisable GridItem components (IFrame, bookmarks, and other cool features) and a basic container controller for Docker, made with Vue.js and Rust (Rocket).\nWhy \u0026hellip; Everything is a web app that can be installed with Docker in a container. I want to manage all my web applications on one dashboard like Sonarr and Jellyfin without opening like 10 tabs in Chrome (rip my RAM). When using services like Portainer or the Docker CLI, it takes a while to set up a reverse proxy with SSL to secure your homelab. So I want to write widgets (Applets) that can do all the tasks I do daily when managing my homelab. 
Also, I want to make a simple dashboard with widgets (Vue.js components) in the spirit of the Windows Vista sidebar, but on the web and saved in a database.\nPreview Look Preview look 0.1.5 Preview look 0.1.2 Roadmap DasherControl v1\nFinished Applets with IFrame Save Workspace and switch between workspaces Applets Management Simple Start and Manage Docker Containers CI/CD User Auth Install App with Docker/Docker-Compose In Progress Customize Theme and Change Background Logging Canvas applets Terminal ssh web Tests TODO Documentation User Auth (OAUTH2 Github) Save docker-compose/container info in the database Caddy Config Generator for reverse Proxy and SSL Export and import of containers and workspaces Floating Windows Issues I use an iframe to display the other websites; some site logins will not work because of CSRF tokens or other iframe restrictions.\nInstall (Tested only on Ubuntu 20.04) // bash scripts/rust-setup-dev.sh // bash scripts/js-dev-setup.sh cd frontend \u0026amp;\u0026amp; npm install \u0026amp;\u0026amp; npm run build \u0026amp;\u0026amp; cd .. cargo install diesel_cli --no-default-features --features postgres // go in Rocket.toml and .env and change DATABASE_URL to your postgresql server diesel migration run // create admin user cargo run --bin create_admin // run web app cargo run Docker DOCKER_BUILDKIT=1 docker build -t antoinebou13/dashercontrol . 
Docker-compose DOCKER_BUILDKIT=1 docker-compose up -d --build \u0026amp;\u0026amp; docker-compose logs Run tests cargo test cd frontend \u0026amp;\u0026amp; npm test // no test yet on frontend ","date":"2021-09-06","date_unix":1630939343,"id":"https://antoineboucher.info/CV/blog/projects/dashcontrol/","permalink":"https://antoineboucher.info/CV/blog/projects/dashcontrol/","post_kind":"","section":"projects","summary":"Vue and Rust dashboard for dockerized homelab apps, workspaces, and applets.","tag_refs":[],"tags":[],"tags_text":"","thumb":"/CV/blog/images/post-kind-project.png","title":"DasherControl"},{"content":"FileClassifier FileClassifier is a Python-based command-line tool that automatically organizes files in a specified directory into predefined categories based on their file types. The tool supports multiple file formats, such as images, documents, videos, and more. It also comes with an extendable classifier that allows you to add topic modeling for better organization.\nFeatures Automatic file organization based on file types. Predefined categories for common file formats. Extendable classifier with topic modeling support. Customizable output directory structure. Lightweight and easy to use. Installation To install FileClassifier, simply clone the repository and install the required dependencies:\ngit clone https://github.com/antoinebou12/FileClassifier.git cd FileClassifier pip install poetry poetry install Usage To use FileClassifier, navigate to the project directory and run the main.py script with the required arguments:\npython main.py [OPTIONS] INPUT_DIRECTORY OUTPUT_DIRECTORY Arguments INPUT_DIRECTORY: The directory containing the files to be organized. OUTPUT_DIRECTORY: The directory where the organized files will be moved to. Options --version: Show the version and exit. --types: List supported file types and their categories. --topic-modeling: Enable topic modeling for better organization (requires additional setup). 
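The core move-by-extension behaviour the tool performs can be sketched as follows. This is a minimal illustration, not the project's actual code; the category map and function name are hypothetical:

```python
import shutil
from pathlib import Path

# Hypothetical category map: extension -> output folder name.
CATEGORIES = {
    ".jpg": "images", ".png": "images", ".gif": "images",
    ".pdf": "documents", ".docx": "documents", ".txt": "documents",
    ".mp4": "videos", ".mkv": "videos",
}

def classify(input_directory, output_directory):
    """Move each file into output_directory/<category>/ based on its extension."""
    out = Path(output_directory)
    # Snapshot the listing first, since we move files out while iterating.
    for f in list(Path(input_directory).iterdir()):
        if not f.is_file():
            continue
        category = CATEGORIES.get(f.suffix.lower(), "other")
        dest = out / category
        dest.mkdir(parents=True, exist_ok=True)
        shutil.move(str(f), str(dest / f.name))
```

Unrecognized extensions fall into an `other` bucket; the real tool also supports topic modeling on top of this type-based pass.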
Example python main.py /path/to/input /path/to/output This command will organize the files in /path/to/input and move them to the appropriate folders in /path/to/output.\nExtending the Classifier To add topic modeling to the classifier, you need to modify the Classifier.py script and install additional dependencies. Please refer to the script comments and the provided documentation for more information on how to implement topic modeling.\nContributing Contributions are welcome! If you would like to contribute to FileClassifier, please submit a pull request or open an issue with your ideas and suggestions.\nLicense FileClassifier is released under the MIT License. See the LICENSE file for more information.","date":"2021-09-06","date_unix":1630939343,"id":"https://antoineboucher.info/CV/blog/projects/fileclassifier/","permalink":"https://antoineboucher.info/CV/blog/projects/fileclassifier/","post_kind":"","section":"projects","summary":"Python CLI that organizes files into categories by type, with optional topic modeling.","tag_refs":[],"tags":[],"tags_text":"","thumb":"/CV/blog/images/post-kind-project.png","title":"File Classifier"},{"content":"You can view the project here","date":"2021-09-06","date_unix":1630939343,"id":"https://antoineboucher.info/CV/blog/projects/movietraileranalyzer/","permalink":"https://antoineboucher.info/CV/blog/projects/movietraileranalyzer/","post_kind":"","section":"projects","summary":"Tools and workflows for analyzing movie trailers (see repository for details).","tag_refs":[],"tags":[],"tags_text":"","thumb":"/CV/blog/images/post-kind-project.png","title":"Movie Trailer Analyzer"},{"content":"PlantUMLApi Python interface with the PlantUML web. 
PlantUML is a library for generating UML diagrams from a simple text markup language.\nplantumlapi is a simple remote client interface to a PlantUML server, using the same custom encoding used by most other PlantUML clients.\nThis client defaults to the public PlantUML server but can be used against any server.\nInstallation To install, run the following command:\npip install plantumlapi pip install git+https://github.com/antoinebou12/plantumlapi Command Line Usage usage: plantuml.py [-h] [-o OUT] [-s SERVER] filename [filename ...] Generate images from PlantUML defined files using PlantUML server positional arguments: filename file(s) to generate images from optional arguments: -h, --help show this help message and exit -o OUT, --out OUT directory to put the files into -s SERVER, --server SERVER server to generate from; defaults to plantuml.com Usage from plantumlapi.plantumlapi import PlantUML # Create a PlantUML object, set the output directory and server p = PlantUML(url=\u0026#34;https://www.plantuml.com/plantuml/duml/SyfFKj2rKt3CoKnELR1Io4ZDoSa70000\u0026#34;) # Generate a diagram from a string p.process(\u0026#34;@startuml\\nclass Foo\\n@enduml\u0026#34;) Docker docker run -d -p 8080:8080 plantuml/plantuml-server:jetty from plantumlapi.plantumlapi import PlantUML\n# Create a PlantUML object, set the output directory and server\np = PlantUML(url=\u0026#34;http://localhost:8080/png\u0026#34;)\n# Generate a diagram from a string\np.process(\u0026#34;@startuml\\nclass Foo\\n@enduml\u0026#34;) ","date":"2021-09-06","date_unix":1630939343,"id":"https://antoineboucher.info/CV/blog/projects/plantumlapi/","permalink":"https://antoineboucher.info/CV/blog/projects/plantumlapi/","post_kind":"","section":"projects","summary":"Python client for generating diagrams via a PlantUML server.","tag_refs":[],"tags":[],"tags_text":"","thumb":"/CV/blog/images/post-kind-project.png","title":"PlantUMLApi"},{"content":"here\nRawAnalyser RawAnalyser is a simple Python-based tool designed for
analyzing raw images. It provides functionality for black and white clipping detection, gamma calculation, and flatness with vignette detection (Algolux).\nPrerequisites You need to have Python 2.7 installed and the following libraries:\njson time numpy scipy pyqt5 argparse argcomplete watchdog logging colorama plotly matplotlib Usage Run the RawAnalyser script: python RawAnalyser.py \u0026lt;pathfolder\u0026gt; \u0026lt;pointfile\u0026gt; --func \u0026lt;clipping,gamma,vignette,histogram,noHist,basic, default all\u0026gt; --bl \u0026lt;black level, default 1024\u0026gt; --vignette \u0026lt;pathvignette\u0026gt; Start the GUI: python GUI.py Check the value of a pixel and its color from the Bayer pattern: python RawPixel.py input \u0026lt;file\u0026gt; x y --bitdepth \u0026lt;8,10,16, default 16\u0026gt; Scripts Description RawAnalyser.py A script designed for checking black- and white-level clipping in a raw image. You need to provide a path to the folder to be checked and a JSON file with the region of interest and the gray patches.\npoints.json The JSON file specifying the region of interest. An example of its structure is as follows:\n{ \u0026#34;last_y\u0026#34;: 798, \u0026#34;last_x\u0026#34;: 1353, \u0026#34;gray_first_y\u0026#34;: 691, \u0026#34;gray_first_x\u0026#34;: 606, \u0026#34;first_x\u0026#34;: 603, \u0026#34;first_y\u0026#34;: 315, \u0026#34;gray_last_x\u0026#34;: 1347, \u0026#34;gray_last_y\u0026#34;: 795 } GUI.py A GUI script for image selection. It allows you to select the region of interest and the gray patches in a raw or TIFF file, which are then saved in a JSON file. 
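Concretely, reading the points.json rectangles, the --bl clipping count, and the Bayer lookup can be sketched like this. It is a simplified Python 3 illustration (the actual scripts target Python 2.7 and operate on full raw buffers), and the helper names are hypothetical:

```python
import json

def load_roi(points_path):
    """Read a points.json file and return the chart and gray-patch rectangles."""
    with open(points_path) as fh:
        p = json.load(fh)
    chart = (p["first_x"], p["first_y"], p["last_x"], p["last_y"])
    gray = (p["gray_first_x"], p["gray_first_y"], p["gray_last_x"], p["gray_last_y"])
    return chart, gray

def count_clipped(values, black_level=1024, white_level=65535):
    """Count samples at or below the black level and at or above the white level."""
    black = sum(1 for v in values if v <= black_level)
    white = sum(1 for v in values if v >= white_level)
    return black, white

def bayer_color(x, y, pattern="bggr"):
    """Channel of pixel (x, y) for a row-major 2x2 Bayer pattern string like 'bggr'."""
    return pattern[(y % 2) * 2 + (x % 2)]
```

With the document's example points.json, `load_roi` yields a chart rectangle of (603, 315, 1353, 798), and `bayer_color(0, 0)` on the OV2740's \u0026#39;bggr\u0026#39; pattern is blue.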
It also constantly checks for new files in the raw folder.\nRawPixel.py A script to inspect the value of a specific pixel and determine its color using the Bayer pattern.\n20MPto5MP.py A script for converting 20-megapixel images to 5-megapixel images, specifically designed for the new OV20880 sensor.\nCamera Example of camera sensor information:\nCamera: OV2740 Resolution: (1088, 1928) ImageType: BINARY16U Sensor value: 10 bit Bayer pattern: \u0026#39;bggr\u0026#39; This tool provides a simple and efficient way to analyze raw images, making it a vital part of any image processing workflow.","date":"2021-09-06","date_unix":1630939343,"id":"https://antoineboucher.info/CV/blog/projects/rawanalyser/","permalink":"https://antoineboucher.info/CV/blog/projects/rawanalyser/","post_kind":"","section":"projects","summary":"Python tool for raw image analysis: clipping, gamma, vignette, and Bayer pixel inspection.","tag_refs":[],"tags":[],"tags_text":"","thumb":"/CV/blog/images/post-kind-project.png","title":"RawAnalyser"},{"content":"Serilog.Sinks.SentrySDK A Serilog sink for Sentry that simplifies error and log management in your applications.\nBased on serilog-contrib/serilog-sinks-sentry\nProject Status Available Packages Package Nuget Serilog.Sinks.SentrySDK Package Link Serilog.Sinks.SentrySDK.AspNetCore Package Link Installation The library is available as a Nuget package.\nYou can install it with the following command:\ndotnet add package Serilog.Sinks.SentrySDK Install-Package Serilog.Sinks.SentrySDK Demos Demos demonstrating how to use this library can be found here.\nGetting Started Adding the Sentry Sink Add the Sentry sink to your Serilog logger configuration, so that the logs will be sent to your Sentry instance. The Sentry DSN must be provided.\nYou can also configure Serilog using a JSON configuration. 
Here\u0026rsquo;s a sample:\n{ \u0026#34;Logging\u0026#34;: { \u0026#34;IncludeScopes\u0026#34;: false, \u0026#34;LogLevel\u0026#34;: { \u0026#34;Default\u0026#34;: \u0026#34;Warning\u0026#34; } }, \u0026#34;Serilog\u0026#34;: { \u0026#34;Using\u0026#34;: [ \u0026#34;Serilog.Sinks.SentrySDK\u0026#34; ], \u0026#34;MinimumLevel\u0026#34;: \u0026#34;Debug\u0026#34;, \u0026#34;WriteTo\u0026#34;: [ { \u0026#34;Name\u0026#34;: \u0026#34;Sentry\u0026#34;, \u0026#34;Args\u0026#34;: { \u0026#34;dsn\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;sendDefaultPii\u0026#34;: true, \u0026#34;maxBreadcrumbs\u0026#34;: 200, \u0026#34;maxQueueItems\u0026#34;: 100, \u0026#34;debug\u0026#34;: true, \u0026#34;diagnosticLevel\u0026#34;: \u0026#34;Error\u0026#34;, \u0026#34;environment\u0026#34;: \u0026#34;Development\u0026#34;, \u0026#34;operationName\u0026#34;: \u0026#34;SentryConsole\u0026#34;, \u0026#34;release\u0026#34;: \u0026#34;1.0.5\u0026#34;, \u0026#34;serverName\u0026#34;: \u0026#34;SentryConsole\u0026#34;, \u0026#34;dist\u0026#34;: \u0026#34;SentryConsole\u0026#34;, \u0026#34;tags\u0026#34;: \u0026#34;SentryConsole=SentryConsole\u0026#34;, \u0026#34;tracesSampleRate\u0026#34;: 1.0, \u0026#34;tracesSampler\u0026#34;: \u0026#34;AlwaysSample\u0026#34;, \u0026#34;stackTraceMode\u0026#34;: \u0026#34;Enhanced\u0026#34;, \u0026#34;isGlobalModeEnabled\u0026#34;: true, \u0026#34;sampleRate\u0026#34;: 1.0, \u0026#34;attachStacktrace\u0026#34;: true, \u0026#34;autoSessionTracking\u0026#34;: true, \u0026#34;enableTracing\u0026#34;: true } } ], \u0026#34;Enrich\u0026#34;: [\u0026#34;FromLogContext\u0026#34;, \u0026#34;WithMachineName\u0026#34;, \u0026#34;WithThreadId\u0026#34;], \u0026#34;Properties\u0026#34;: { \u0026#34;Application\u0026#34;: \u0026#34;Sample\u0026#34; } } } var configuration = new ConfigurationBuilder() .SetBasePath(Directory.GetCurrentDirectory()) .AddJsonFile(\u0026#34;appsettings.json\u0026#34;) .Build(); var log = new LoggerConfiguration() 
.ReadFrom.Configuration(configuration) .Enrich.FromLogContext() .CreateLogger(); // By default, only messages with level errors and higher are captured log.Error(\u0026#34;This error goes to Sentry.\u0026#34;); Data Scrubbing Data scrubbing allows you to sanitize your logs before they are sent to Sentry. This can be useful for removing sensitive information.\nTo use it, provide a custom IScrubber implementation when setting up the Sentry Sink:\nvar log = new LoggerConfiguration() .WriteTo.Sentry(\u0026#34;Sentry DSN\u0026#34;, dataScrubber: new MyDataScrubber()) .Enrich.FromLogContext() .CreateLogger(); Capturing HttpContext (ASP.NET Core) To include user, request body, and header information in the logs, some additional setup is required.\nFirst, install the ASP.NET Core sink with the command:\ndotnet add package Serilog.Sinks.SentrySDK.AspNetCore Install-Package Serilog.Sinks.SentrySDK.AspNetCore Then, update your logger configuration to include a custom HttpContextDestructingPolicy:\nvar log = new LoggerConfiguration() .WriteTo.Sentry(\u0026#34;Sentry DSN\u0026#34;) .Enrich.FromLogContext() // Add this two lines to the logger configuration .Destructure.With\u0026lt;HttpContextDestructingPolicy\u0026gt;() .Filter.ByExcluding(e =\u0026gt; e.Exception?.CheckIfCaptured() == true) .CreateLogger(); Finally, add the Sentry context middleware to your Startup.cs:\npublic void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory) { // Add this line app.AddSentryContext(); // Other stuff } With these steps, your logs will include detailed information about the HTTP context of the requests.\nSentry SDK Properties BackgroundWorker: A property that gets or sets the worker used by the client to pass envelopes. SentryScopeStateProcessor: A property to get or set the Scope state processor. SendDefaultPii: A property to get or set whether to include default Personal Identifiable Information. 
NetworkStatusListener: A property to get or set a mechanism to convey network status to the caching transport. ServerName: A property to get or set the name of the server running the application. AttachStacktrace: A property to get or set whether to send the stack trace of an event captured without an exception. IsEnvironmentUser: A property to get or set whether to report the System.Environment.UserName as the User affected in the event. SampleRate: A property to get or set the optional sample rate. ShutdownTimeout: A property to get or set how long to wait for events to be sent before shutdown. MaxBreadcrumbs: A property to get or set the maximum breadcrumbs. MaxQueueItems: A property to get or set the maximum number of events to keep while the worker attempts to send them. BeforeBreadcrumb: A property to get or set a callback function to be invoked when a breadcrumb is about to be stored. BeforeSendTransaction: A property to get or set a callback to invoke before sending a transaction to Sentry. MaxCacheItems: A property to get or set the maximum number of events to keep in cache. Dsn: A property to get or set the Data Source Name of a given project in Sentry. Environment: A property to get or set the environment the application is running. Distribution: A property to get or set the distribution of the application, associated with the release set in SentryOptions.Release. Release: A property to get or set the release information for the application. BeforeSend: A property to get or set a callback to invoke before sending an event to Sentry. Methods AddJsonConverter(JsonConverter converter): A method to add a JsonConverter to be used when serializing or deserializing objects to JSON with the SDK. SetBeforeBreadcrumb(Func\u0026lt;Breadcrumb, Breadcrumb?\u0026gt; beforeBreadcrumb): A method to set a callback function to be invoked when a breadcrumb is about to be stored. 
SetBeforeBreadcrumb(Func<Breadcrumb, Hint, Breadcrumb?> beforeBreadcrumb): An overload of SetBeforeBreadcrumb that accepts a Hint. SetBeforeSend(Func<SentryEvent, SentryEvent?> beforeSend): A method to configure a callback function to be invoked before sending an event to Sentry. SetBeforeSend(Func<SentryEvent, Hint, SentryEvent?> beforeSend): An overload of SetBeforeSend that accepts a Hint. SetBeforeSendTransaction(Func<Transaction, Transaction?> beforeSendTransaction): A method to configure a callback to invoke before sending a transaction to Sentry. SetBeforeSendTransaction(Func<Transaction, Hint, Transaction?> beforeSendTransaction): An overload of SetBeforeSendTransaction that accepts a Hint. ","date":"2021-09-06","date_unix":1630939343,"id":"https://antoineboucher.info/CV/blog/projects/serilog.sinks.sentrysdk/","permalink":"https://antoineboucher.info/CV/blog/projects/serilog.sinks.sentrysdk/","post_kind":"","section":"projects","summary":"A Serilog sink that writes events to Sentry using Sentry SDK.","tag_refs":[{"name":"Serilog","permalink":"https://antoineboucher.info/CV/blog/tags/serilog/"},{"name":"Sentry","permalink":"https://antoineboucher.info/CV/blog/tags/sentry/"},{"name":"SentrySDK","permalink":"https://antoineboucher.info/CV/blog/tags/sentrysdk/"}],"tags":["Serilog","Sentry","SentrySDK"],"tags_text":"Serilog Sentry SentrySDK","thumb":"/CV/blog/images/post-kind-project.png","title":"Serilog.Sinks.SentrySDK"},{"content":"WordsUnveil WordsUnveil is a multilingual twist on the popular game Wordle, providing a fun and educational way to learn new languages. Built with advanced technologies for efficiency and a seamless gaming experience.\nGetting Started Development Setup\n# Navigate to the WordsUnveil directory cd WordsUnveil # Install dependencies yarn install # Copy default environment variables cp .env.default .env # Run database migrations yarn rw prisma migrate dev # (Optional) Seed the database yarn rw exec seed # Start the development server yarn rw dev Docker-Compose Setup\n# Start services with Docker docker-compose up -d PostgreSQL Database Setup\n# Create a PostgreSQL container named `db` docker run --name=db -e POSTGRES_USER=admin -e POSTGRES_PASSWORD=admin -p '5432:5432' -d postgres Deployment Deploy the application to a bare metal server in production mode.\n# Initial deployment yarn rw deploy baremetal production --first-run ","date":"2021-09-06","date_unix":1630939343,"id":"https://antoineboucher.info/CV/blog/projects/wordunveil/","permalink":"https://antoineboucher.info/CV/blog/projects/wordunveil/","post_kind":"","section":"projects","summary":"Multilingual Wordle-style game built with RedwoodJS, GraphQL, and Prisma.","tag_refs":[],"tags":[],"tags_text":"","thumb":"/CV/blog/images/post-kind-project.png","title":"WordUnveil"},{"content":"MarketWatch API Python Library MarketWatch\nDocumentation\nA Python library to interact with the MarketWatch Stock Market Game. Based on code from\nhttps://github.com/kevindong/MarketWatch_API/ https://github.com/bwees/pymarketwatch Features Logging in and out of the site Getting the current price of a stock Getting information about games on the site Buying, selling, shorting, and covering stocks in a game Creating, adding to, getting, and deleting watchlists Getting, adding to, and deleting items from a portfolio Getting and cancelling pending orders Checking if the game is down Installation pip install marketwatch pip install git+https://github.com/antoinebou12/marketwatch.git git clone https://github.com/antoinebou12/marketwatch.git Usage Here are 
some examples of how you can use the MarketWatch class:\nImport First, import the MarketWatch class from the package:\nfrom marketwatch import MarketWatch Login Then, create an instance of the MarketWatch class using your MarketWatch username and password:\nmarketwatch = MarketWatch(username, password) Get Stock Price To get the current price of a stock:\nmarketwatch.get_price(\"AAPL\") Interact with Games https://www.marketwatch.com/games\nTo get information about games on the site:\nmarketwatch.get_games() Get Game marketwatch.get_game(\"game-name\") Get Game Settings marketwatch.get_game_settings(\"game-name\") Get Leaderboard marketwatch.get_leaderboard(\"game-name\") Get Portfolio marketwatch.get_portfolio(\"game-name\") Get Positions marketwatch.get_positions(\"game-name\") Get Pending Orders marketwatch.get_pending_orders(\"game-name\") Buy Stock marketwatch.buy(\"game-name\", \"AAPL\", 100) Sell Stock marketwatch.sell(\"game-name\", \"AAPL\", 100) Create Watchlist https://www.marketwatch.com/watchlist\nTo create a watchlist:\nmarketwatch.create_watchlist('My Watchlist') Add Stock to Watchlist To add stocks to a watchlist:\nmarketwatch.add_to_watchlist(watchlist_id, ['AAPL', 'GOOG']) Get All Watchlists To get all watchlists:\nwatchlists = marketwatch.get_watchlists() Delete Watchlist To delete a watchlist:\nmarketwatch.delete_watchlist(watchlist_id) Example import os username = os.environ.get(\"MARKETWATCH_USERNAME\") password = os.environ.get(\"MARKETWATCH_PASSWORD\") marketwatch = MarketWatch(username, password) print(f\"Price: {marketwatch.get_price('AAPL')} \\n\") print(f\"Games: {marketwatch.get_games()} \\n\") games1 = 
marketwatch.get_games()[0][\"name\"].lower().replace(\" \", \"-\") print(f\"Game: {marketwatch.get_game(games1)} \\n\") print(f\"Game Settings: {marketwatch.get_game_settings(games1)} \\n\") print(f\"Leaderboard: {marketwatch.get_leaderboard(games1)} \\n\") print(f\"Portfolio: {marketwatch.get_portfolio(games1)} \\n\") print(f\"Position: {marketwatch.get_positions(games1)}\") print(f\"Orders Pending: {marketwatch.get_pending_orders(games1)}\") marketwatch.buy(games1, \"AAPL\", 100) print(f\"Position diff: {marketwatch.get_positions(games1)}\") Contributing Contributions are welcome. Please open an issue or submit a pull request.\nLicense This project is licensed under the MIT License.","date":"2021-09-06","date_unix":1630929600,"id":"https://antoineboucher.info/CV/blog/projects/marketwatch/","permalink":"https://antoineboucher.info/CV/blog/projects/marketwatch/","post_kind":"","section":"projects","summary":"Python library to interact with the MarketWatch stock market game.","tag_refs":[],"tags":[],"tags_text":"","thumb":"/CV/blog/images/post-kind-project.png","title":"MarketWatch API Python library"},{"content":"Egg-Stuffed Bella Mushrooms with Goat Cheese and Spinach Category Type: Vegetarian Main Ingredient: Mushrooms Preparation Time: 1 minute Cooking Time: 5 minutes Total Time: 6 minutes Servings: 1 Ingredients 3 Large bella mushrooms (price: 1.99$) 2 Eggs (price: 0.50$) 100g Goat cheese (price: 1.99$) 1 Handful of spinach (price: 0.50$) Salt (price: 0.01$) Avocado oil (price: 0.01$) Prices Total price: 5.00$ Instructions Preheat your pan with avocado oil. Remove the stems from the mushrooms, place them cap-side down in the pan. Crack an egg into each mushroom cap. Season with salt, add spinach around the mushrooms. Cover and cook until the eggs are set. 
Crumble goat cheese over the mushrooms and eggs. Serve hot with sliced avocado. Review ⭐⭐⭐⭐ - “Juicy and easy to make. This vegetarian dish is a delightful blend of textures and flavors. Perfect for a healthy, satisfying meal!”","date":"2024-01-05","date_unix":1704412800,"id":"https://antoineboucher.info/CV/blog/recipes/champignon/","permalink":"https://antoineboucher.info/CV/blog/recipes/champignon/","post_kind":"","section":"recipes","summary":"A delicious vegetarian dish that's easy to make.","tag_refs":[],"tags":[],"tags_text":"","thumb":"https://antoineboucher.info/CV/blog/recipes/champignon/featured_hu_9f3ed0632dcb76cd.png","title":"Egg-Stuffed Bella Mushrooms with Goat Cheese and Spinach"},{"content":"\nPhoto: Unsplash\nCategory Type: Vegetarian Main ingredient: Spinach & goat cheese Preparation time: ~25 minutes Cooking time: ~35 minutes Total time: ~60 minutes Servings: 6–8 Ingredients Pastry\n1 round shortcrust or all-butter puff pastry (~230 g), thawed if frozen\n(or enough homemade dough for a 23–25 cm tart tin) Filling\n400 g fresh spinach (or 250 g frozen, thawed and squeezed dry) 1 tbsp olive oil or butter 2 eggs 100 ml heavy cream (or crème fraîche) 150 g soft goat cheese (chèvre), crumbled 50 g walnuts, lightly toasted and chopped 1 small garlic clove, finely grated (optional) Salt, black pepper, pinch of nutmeg Finish\n2–3 tbsp runny honey A few extra walnut halves (optional) Instructions Preheat the oven to 190 °C (375 °F). Roll the pastry to line a 23–25 cm fluted tart tin; trim the edge and chill 15 minutes if you have time. Dock the base with a fork, line with parchment and baking beans, and blind-bake 12–15 minutes until the sides are set. Remove the beans, then bake about 5 minutes more until the base is lightly golden. Wilt the spinach in a large pan with the oil or butter over medium heat. Season lightly, then drain well and press out excess liquid. Chop roughly if the leaves are large. 
Whisk the eggs with the cream, salt, pepper, and nutmeg. Fold in the spinach, most of the goat cheese, and the chopped walnuts. Pour into the tart shell and scatter the remaining cheese on top. Bake 25–30 minutes until the filling is set and golden in spots. Cool slightly, then drizzle with honey and add extra walnuts if you like. Serve warm or at room temperature. Notes Honey is added after baking so it stays aromatic and does not scorch.\nReview ⭐⭐⭐⭐ — Balanced sweet-salty-nutty; good for brunch or a light dinner with a green salad.","date":"2024-01-05","date_unix":1704412800,"id":"https://antoineboucher.info/CV/blog/recipes/spinach-goat-cheese-walnut-tart/","permalink":"https://antoineboucher.info/CV/blog/recipes/spinach-goat-cheese-walnut-tart/","post_kind":"","section":"recipes","summary":"Savory tart with wilted spinach, chèvre, toasted walnuts, and a drizzle of honey—blind-baked shell and simple custard-style filling.","tag_refs":[{"name":"Vegetarian","permalink":"https://antoineboucher.info/CV/blog/tags/vegetarian/"},{"name":"Baking","permalink":"https://antoineboucher.info/CV/blog/tags/baking/"}],"tags":["Vegetarian","Baking"],"tags_text":"Vegetarian Baking","thumb":"https://antoineboucher.info/CV/blog/recipes/spinach-goat-cheese-walnut-tart/featured_hu_92a2e463d30dcb4c.jpg","title":"Spinach, goat cheese, honey and walnut tart"}]