<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Lakshya Thakur's blog]]></title><description><![CDATA[**Creatively Curious 👨‍💻 📸 🎶 🤔**Have tinkered with JavaScript majorly | I like to solve problems on Stack Overflow, write blog articles, create a side proj]]></description><link>https://blog.lakbychance.com</link><generator>RSS for Node</generator><lastBuildDate>Fri, 17 Apr 2026 21:24:38 GMT</lastBuildDate><atom:link href="https://blog.lakbychance.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Scoring the Hat-Trick: Milestones at Hashnode]]></title><description><![CDATA[Introduction

Alright, alright, alright !!! I have officially completed three years at Hashnode now. If you have been following my blog for a while, I have this ritual of documenting what all went in the past year at my job.
My general process for do...]]></description><link>https://blog.lakbychance.com/scoring-the-hat-trick-milestones-at-hashnode</link><guid isPermaLink="true">https://blog.lakbychance.com/scoring-the-hat-trick-milestones-at-hashnode</guid><category><![CDATA[General Programming]]></category><category><![CDATA[software development]]></category><category><![CDATA[Hashnode]]></category><dc:creator><![CDATA[Lakshya Thakur]]></dc:creator><pubDate>Sun, 17 Nov 2024 05:55:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1731784146485/b11ecce1-2014-4112-8d0c-72a8581bd8d0.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p><img src="https://i.makeagif.com/media/4-10-2017/FgJTBC.gif" alt class="image--center mx-auto" /></p>
<p><strong>Alright, alright, alright !!!</strong> I have officially completed <strong>three</strong> years at Hashnode now. If you have been following my blog for a while, you know I have this ritual of documenting what went on over the past year at my job.</p>
<p>My general process for documenting is :-</p>
<ul>
<li><p>Go through my commits across different projects and social media interactions.</p>
</li>
<li><p>Note down high-level headings in Apple Notes.</p>
</li>
<li><p>Frame an article out of what I can remember.</p>
</li>
</ul>
<p>You can catch up on all my previous experiences here :-</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://blog.lakbychance.com/twofold-journey-leveling-up-at-hashnode">https://blog.lakbychance.com/twofold-journey-leveling-up-at-hashnode</a></div>
<p> </p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://blog.lakbychance.com/one-planet-later-at-hashnode">https://blog.lakbychance.com/one-planet-later-at-hashnode</a></div>
<p> </p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://blog.lakbychance.com/500-commits-later-at-hashnode">https://blog.lakbychance.com/500-commits-later-at-hashnode</a></div>
<p> </p>
<h2 id="heading-blog-dashboard-revamp">Blog dashboard revamp</h2>
<p>I think this is the largest piece of work that I have undertaken at Hashnode. The one that comes closest is the <strong>WYSIWYG</strong> editor, but this is probably the longest I have worked on a project from start to finish, and I am excited to share all the tidbits about it here.</p>
<p>Since I joined Hashnode, I have seen our design language evolve into something great over time. We have worked on bringing <strong>consistency</strong> to the whole experience and we are still pushing for it every day. The main <a target="_blank" href="http://hashnode.com">site</a> has gone through a good amount of design changes during my tenure. The major change started with our <a target="_blank" href="https://blog.lakbychance.com/twofold-journey-leveling-up-at-hashnode#heading-rhpc-reduce-home-page-clutter">feeds</a> and then there was an <a target="_blank" href="https://x.com/hashnode/status/1732364075009450328">editor design revamp</a>.</p>
<p>But we knew that our blog dashboard felt inconsistent with our new design language. There were many times we wanted to prioritize revamping it, but we also knew it wasn’t a small job and would require dedicated engineers working on it, irrespective of other bugs or features that might come up.</p>
<p>A high-level overview of the work :-</p>
<ul>
<li><p>Creating <strong>23 UI screens</strong> from scratch for the whole dashboard in the new codebase.</p>
</li>
<li><p>Creating new <strong>GQL APIs</strong> to power this new dashboard</p>
</li>
</ul>
<p>That needs a good amount of <strong>PLANNING</strong>!!!</p>
<p><img src="https://media.tenor.com/PMjA85598MYAAAAM/whats-the-plan-leo-pap.gif" alt class="image--center mx-auto" /></p>
<h3 id="heading-the-plan">The plan</h3>
<p>Our usage of Discord forums has increased quite a lot over the last 2 years. For every major feature, we open up a forum and tag the respective people involved in the decision making. This has worked well in practice, and we opened one called <strong>Blog Dashboard</strong> for this project.</p>
<p>Our CTO <a class="user-mention" href="https://hashnode.com/@sandeep">Sandeep</a> shared some interesting insights on why this revamp was necessary for us right now, based on conversations with users and on usage analytics. The team also felt aligned with the premise. Our designers already had first iterations of a few screens ready for us, based on their conversations with the founders.</p>
<p>We use <a target="_blank" href="https://linear.app/">Linear</a> for project management, and man, this might be the most <strong>PM</strong>-style breakdown of work I have done in my career so far. Our project was called <strong>Rebuilding blog dashboard</strong>, and the first ticket I created there was a small POC to figure out how to divide the whole project into a bunch of independent tasks that could be picked up.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731754212717/a25e2bfa-5e53-418c-9d65-aa771a49cf28.png" alt class="image--center mx-auto" /></p>
<p>This POC involved creating the <strong>General</strong> page of the blog dashboard without adhering to the new design for now, and integrating it with a temporary <strong>UpdatePublication</strong> mutation. Doing the POC was helpful in estimating the effort required per screen and in grouping together screens that follow the same pattern.</p>
<p>This exercise resulted in a big Notion doc with projected efforts and open questions, which were resolved with the help of <a class="user-mention" href="https://hashnode.com/@JannikWempe">Jannik Wempe</a> and <a class="user-mention" href="https://hashnode.com/@SandroVolpicella">Sandro Volpicella</a> for all the API efforts. The clarifications further helped in creating individual tasks, and subtasks for each screen in the blog dashboard. This meant we had figured out close to <strong>60%</strong> of the unknowns and were ready to tackle the remaining <strong>40%</strong> as we progressed with the revamp.</p>
<p>We divided the screens into three sets and started with Set-1 of 6 screens. Everything was feature-flagged from the start; only Hashnode team members could see the new blog dashboard. Our first set of feedback came from the founders and the designers.</p>
<h3 id="heading-the-execution">The execution</h3>
<p>I got to work with <a class="user-mention" href="https://hashnode.com/@unclebigbay">Ayodele Samuel Adebayo</a> for the whole revamp of blog dashboard and it was a great collaborative experience. We closely worked on the UI revamp of things. I undertook some of the initial GQL mutations work until <a class="user-mention" href="https://hashnode.com/@vamsirao">Vamsi Rao</a> joined us to speed things up there.</p>
<p>During this work, a lot of components like <code>InputField</code>, <code>TextArea</code>, <code>Select</code>, <code>Breadcrumb</code>, <code>Colorpicker</code> and many more were created for our in-house component library. A good amount of work went into translating the design into functioning components that could be reused not just throughout the dashboard but also in future screens (and it paid off!!).</p>
<p>We completed the first 6 screens of Set-1 before mid-January.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731754935021/0ef164cb-423f-47b2-89c3-1225e3a8ea3b.png" alt class="image--center mx-auto" /></p>
<p>As things progressed, we realized that the whole revamp would take longer than anticipated, and instead of a big-bang release, we would release it in phases on a <code>/beta</code>-prefixed route until we could totally replace the existing dashboard. <a class="user-mention" href="https://hashnode.com/@iamshadmirza">Shad Mirza</a> also joined us to fast-track the API efforts. Feedback from real users is necessary to validate the new experience, even if it’s just a set of screens that they can try. So we decided to ship <strong>13</strong> screens in <code>Phase-1</code> of the release to end users by the end of February, and boy did we do it with two days to spare 😎</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731755762421/2b135c8c-b2de-441c-9c3d-f075e9b1f83e.png" alt class="image--center mx-auto" /></p>
<p>At this point I had to take a break from the <strong>blog dashboard</strong> work to focus on <strong>Docs</strong>, specifically the new docs editor, and build it along with <a class="user-mention" href="https://hashnode.com/@rajatkapoor">Rajat Kapoor</a> and <a class="user-mention" href="https://hashnode.com/@devcodesthings">Dev Shekhawat</a>. I could jump back to the project after that. So before leaving, I had a good discussion with the team on future milestones for <code>Phase-2</code>. From then on, I was only involved in code reviews for the <strong>blog dashboard</strong>.</p>
<p>I kept a close eye on the blog dashboard progress nevertheless. In <code>Phase-2</code>, the new screens were going to prod directly to users after a design review and a code review. But this also meant that until both reviews were done, nothing could be shipped to prod. This started to look like a bottleneck to me, because sometimes reviews are bound to take time, and if hurried just for the sake of shipping to real users, we might compromise on quality. So we took inspiration from <code>Phase-1</code> again and started shipping bits behind a feature flag to prod directly. The design review could then happen on prod without new users being affected, and the code review could happen in multiple phases as bandwidth allowed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731756291020/bfc25848-e900-4957-beb8-618f8e118eb4.png" alt class="image--center mx-auto" /></p>
<p>And this worked well. We have been following this model now for big features at Hashnode.</p>
<p>So after giving almost <strong>2.5</strong> months to <strong>Docs</strong>, which took us past mid-May, I jumped back to the blog dashboard; by then we were in the end stage of the <code>Phase-2</code> screens. I worked along with Sam on some final touches to the <strong>Domain</strong> screen, and also on revamping the <strong>Members</strong> page’s role-based invite system based on user feedback.</p>
<p>We now had <strong>18</strong> screens done and <strong>5</strong> remaining. It was time to gear up for the</p>
<p><img src="https://www.creativefabrica.com/wp-content/uploads/2022/09/25/Final-Phase-Stamp-Graphics-39262102-4-580x363.jpg" alt class="image--center mx-auto" /></p>
<p>We had another good discussion around this and decided to move the old dashboard to a <code>/legacy</code> route and make the <code>/beta</code> one the default. Since most of the screens were ready, it felt like the right time to make them the default for all users and go more aggressive on user feedback. The remaining 5 would redirect to the old dashboard until, one by one, they landed in the new one. And finally, after 2 more months of work and multiple discussions, we shipped the remaining screens, the last one being <strong>GitHub</strong>. I didn’t plan it, but it took exactly <strong>9 months</strong> to get to this stage. It’s almost like a baby 👶.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731758863638/e9ef87ce-2333-42e3-ace3-a1b7ebd821d7.png" alt class="image--center mx-auto" /></p>
<p>Taking the lead on this was a learning experience in itself. I realized that besides the technical bits of a long project, how effectively you <strong>organize and communicate</strong> the progress is equally important. We posted more than <strong>50</strong> progress updates in the forum over the course of 9 months. That averages out to at least <strong>1 update/week</strong>, which kept us going and pushing forward.</p>
<p>Btw, this is what the very first push to prod looked like for the <strong>Blog dashboard</strong> :-</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731759645611/d2370728-187d-4ceb-b553-d4bcaccf355c.png" alt class="image--center mx-auto" /></p>
<p>This is what it looks like now :-</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731759685702/88c7cb95-64b8-4091-8b75-8c6771824a3b.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-docs-editor">Docs editor</h2>
<p>Now, while I was taking a break from the implementation of the <strong>blog dashboard,</strong> I was building stuff on the editor front of our new docs offering.</p>
<p>Btw, this is the <a target="_blank" href="https://blog.lakbychance.com/twofold-journey-leveling-up-at-hashnode#heading-undisclosed-poc">undisclosed POC</a> that I teased in my last year-end experience post. Though we redid all of it in terms of UI and backend, I am glad it took shape as a released product for users.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731760137746/19891515-1403-46b0-ab3a-5ac322a9d18b.png" alt class="image--center mx-auto" /></p>
<p>When I was brought in, we were just getting started on the editor. My role was to figure out everything related to the left sidebar, which involved :-</p>
<ul>
<li><p>A guide selector to navigate through different guides + create a new guide.</p>
</li>
<li><p>Creating, updating, deleting and moving (drag and drop) all sorts of documentation items (section, page or link).</p>
</li>
<li><p>Updating existing Guide settings.</p>
</li>
</ul>
<p>The most interesting bit of this was the drag-and-drop experience. I went through different libraries and experimented with them. We finally went with <a target="_blank" href="https://github.com/lukasbach/react-complex-tree">react-complex-tree</a>. It’s a nice, accessible, headless tree library suited to building experiences like this. But there was an issue with the DND experience where the drag line didn’t appear at the expected position when dragging items. The reason was that not all items within our UI were equally spaced; some had padding or margins around them. The library expected a fixed height for each item, which I learned as I dug through existing issues and discussions. So this was an essential problem to fix.</p>
<p>That’s when I started looking into the core code responsible for the logic and pair programmed with <strong>Claude 3 Sonnet</strong> for 2-3 days to figure out a solution where we could patch the library and make our requirement work. And we did it !!!</p>
<p>I shared the <a target="_blank" href="https://github.com/lukasbach/react-complex-tree/discussions/66#discussioncomment-9080037">solution</a> on the OSS discussion as well, in case someone is interested. Though I think the latest versions of the library might have solved it (I haven’t gone through that yet). We chose an existing solution to avoid investing a lot of time in building one from scratch, but I would now recommend looking at <a target="_blank" href="https://github.com/atlassian/pragmatic-drag-and-drop"><strong>Pragmatic drag and drop</strong></a> if you have the time to build a fully controllable solution. I am yet to actually dig into it, but it comes from the creators of <strong>react-beautiful-dnd</strong> and is a successor to it.</p>
<p>After building the left sidebar for the docs editor, there were a couple more things I got involved in: bringing the editor as close to the design as possible, which involved refactoring some existing implementations, and integrating the editor with the published view.</p>
<p>I also started getting involved in the blog dashboard implementation work again after 2.5 months of docs work.</p>
<p>Now I was context switching between blogs and docs, since both had their launch priorities.</p>
<p><img src="https://gifdb.com/images/high/multitasking-working-hard-illustration-aydbjbbuqstizy2a.gif" alt class="image--center mx-auto" /></p>
<h2 id="heading-docs-client">Docs client</h2>
<p>This time the work was on the published view of your docs, or what end users actually see. This was primarily being handled by Rajat and Harsh at the time. A lot of stuff here was built in a quick POC fashion, so it needed a good amount of refactoring to evolve into a product.</p>
<h3 id="heading-layouts-layouts-and-layouts">Layouts, layouts and layouts</h3>
<p>This might be the 100th time I am working on layout-related UI stuff at Hashnode. My first stint was the feed layout, and here we are again.</p>
<p>I worked on bringing the header and left sidebar closer to our intended design, along with their functionality. The sidebar was built from scratch using learnings from the docs editor.</p>
<h3 id="heading-preview-mode">Preview mode</h3>
<p>Then I worked on supporting <code>preview</code> mode here. Just like with blogs, where you want to preview how a post will look when published, you want to do the same for your docs.</p>
<p>This was done in collaboration with <a class="user-mention" href="https://hashnode.com/@FlorianFuchs123">Florian Fuchs</a>. Fun fact, I also worked on <a target="_blank" href="https://blog.lakbychance.com/one-planet-later-at-hashnode#heading-migrating-draft-preview-to-blogs">supporting the preview mode</a> on blogs during my first year at Hashnode.</p>
<p><img src="https://media.tenor.com/eB0jy4o2WjgAAAAe/hulk-smash.png" alt class="image--center mx-auto" /></p>
<p>A good amount of reusable code was written when supporting this. Building on top of it has made things predictable now.</p>
<h3 id="heading-performance-improvements">Performance improvements</h3>
<p>We wanted to improve the first-interaction experience for our users when they click on a documentation page. To do this, we tapped into browser <code>prefetching</code>. I got to learn about the <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/API/Speculation_Rules_API">Speculation Rules API</a> while working on it. We built a hook that relies on speculation rules if the browser supports them, and falls back to <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/HTML/Attributes/rel/prefetch">&lt;link rel="prefetch"&gt;</a> otherwise.</p>
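<p>A minimal sketch of what such a fallback could look like (all names here are illustrative, not Hashnode’s actual hook):</p>

```javascript
// Sketch of the speculation-rules-with-prefetch-fallback idea.
// All names are illustrative; this is not Hashnode's actual code.

// Feature-detect the Speculation Rules API.
function supportsSpeculationRules() {
  return (
    typeof HTMLScriptElement !== 'undefined' &&
    typeof HTMLScriptElement.supports === 'function' &&
    HTMLScriptElement.supports('speculationrules')
  );
}

// Prefetch a list of URLs, preferring speculation rules when available,
// otherwise falling back to <link rel="prefetch"> tags.
function prefetchUrls(urls, doc = globalThis.document) {
  if (supportsSpeculationRules()) {
    const script = doc.createElement('script');
    script.type = 'speculationrules';
    script.textContent = JSON.stringify({
      prefetch: [{ source: 'list', urls }],
    });
    doc.head.appendChild(script);
    return 'speculationrules';
  }
  for (const url of urls) {
    const link = doc.createElement('link');
    link.rel = 'prefetch';
    link.href = url;
    doc.head.appendChild(link);
  }
  return 'link-prefetch';
}
```

<p>In browsers without the API, this degrades gracefully to plain prefetch links, so the navigation still benefits from a warm cache.</p>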
<p>These are the top three that come to mind while writing this article, but a lot of work went into the whole docs product, from the editor to the dashboard to the published client view, with close collaboration between different team members. I think we also benefitted a lot from using the same patterns &amp; components that were created during our blog dashboard work to fast-track development of docs features.</p>
<h2 id="heading-onboarding-revamp">Onboarding revamp</h2>
<p>Just before launching our docs offering, we also wanted to simplify the experience of creating a blog or documentation. We discussed this internally a lot and came to the final decision of having a new home screen where users see their existing blog and docs projects and can create new ones. We moved the feed to a <code>/community</code> route. This was a big move, since most of our users are used to seeing the feed when they open HN.</p>
<p>Simplification also meant that our onboarding experience shouldn’t differentiate between a blogs user and a docs user. Our onboarding was fine-tuned for blogs, but the time had come to simplify it further and have just one screen that lets a user quickly create their account and get started with their first blog or docs project.</p>
<p>I remember working on the onboarding UI revamp when I joined Hashnode. This was the second time I was touching the bits around here.</p>
<p>Since this was a crucial entry point for new users, we implemented it behind a feature flag so that existing users would still see the old onboarding until we were ready to ship both the new home page and the new onboarding together.</p>
<p>Here is what the new onboarding screen looks like :-</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731775396924/e2525755-e0c2-41bb-9664-92a9b671d409.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-mermaid-support-for-docs-and-blogs">Mermaid support for Docs and Blogs</h2>
<p>Well, the good news is that both our offerings now support Mermaid diagrams 🎉.</p>
<p>Actually, the first efforts here date back to our internal hackathon last year, where I tried to support it for blogs but wasn’t happy with the solution I came up with during the hackathon week. So I discarded it.</p>
<p>This year Sandeep reached out to ask if we could do it for Docs. Docs has an <strong>MDX</strong> renderer pipeline, which makes creating custom React components easy, and with the initial POC efforts from last year plus some more digging to make it work with MDX, I was able to validate that it would work here. So we first shipped Mermaid support for our Docs. Here is how it looks in practice :-</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://x.com/Lakbychance/status/1841370678341120297">https://x.com/Lakbychance/status/1841370678341120297</a></div>
<p> </p>
<p>Then, a few days later, I was asked to see if we could make it work for Blogs. I knew I wasn’t happy with the old solution there, but having worked on Docs, I got a new idea for tackling the problem, and that solution worked well. The editor abstractions are the same in both docs and blogs; the only extra effort for blogs concerned the render flow in the published view.</p>
<p>It’s funny how a product (docs) that was created after blogs got the feature first, and how inspiration from the work over there helped us bring the same feature to the blogs product.</p>
<h2 id="heading-quality-of-life-improvements">Quality of life improvements</h2>
<h3 id="heading-graphql-eslint-plugin-to-catch-caching-issues">Graphql Eslint plugin to catch caching issues</h3>
<p>As we got into using GraphQL queries and mutations to build features at Hashnode, we ran into a bunch of bugs. The main problem was that the <code>id</code> field was missing from query and mutation responses. This becomes a real headache when you perform a mutation and expect the next query to give you updated results.</p>
<p>Telling developers to always remember to grab the <code>id</code> field wasn't a long-term fix, especially with our fast-paced development and how easy it is to miss things.</p>
<p>So I worked on implementing an ESLint plugin to solve this problem for us, and wrote my second engineering article about it <a target="_blank" href="https://engineering.hashnode.com/how-we-detected-gql-caching-issues-with-an-eslint-plugin">here</a>.</p>
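<p>To illustrate the idea with a toy model (the real plugin walks the actual GraphQL AST; this sketch uses a simplified selection-set tree): every object selection that has sub-fields should also select <code>id</code>, otherwise the normalized client cache can’t match the entity to its updated version after a mutation.</p>

```javascript
// Toy model of the caching check (illustrative, not the real plugin).
// `selection` is a simplified node of shape { name, selections?: [...] }.
// Returns dotted paths of object selections that forgot to ask for `id`.
function findSelectionsMissingId(selection, path = []) {
  const missing = [];
  const children = selection.selections || [];
  if (children.length > 0) {
    // An object selection with sub-fields must also select `id`.
    if (!children.some((child) => child.name === 'id')) {
      missing.push([...path, selection.name].join('.'));
    }
    for (const child of children) {
      missing.push(...findSelectionsMissingId(child, [...path, selection.name]));
    }
  }
  return missing;
}

// e.g. the shape of `query { publication { id, posts { title } } }`:
const query = {
  name: 'publication',
  selections: [
    { name: 'id' },
    { name: 'posts', selections: [{ name: 'title' }] },
  ],
};
// `posts` has sub-fields but no `id`, so it gets flagged:
console.log(findSelectionsMissingId(query)); // → ['publication.posts']
```

<p>Hooking a check like this into lint means the mistake is caught at development time instead of surfacing as a stale-cache bug in production.</p>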
<h3 id="heading-deploying-in-house-avatar-generation-service">Deploying In-house Avatar generation service</h3>
<p>As part of the new onboarding, we also shipped an in-house avatar generation service. Earlier we relied on <strong>ui-avatars.com</strong> to generate an avatar for each user, but we wanted to use our own font for the new ones to match our design. <a class="user-mention" href="https://hashnode.com/@fazlerocks">Fazle Rahman</a> worked on this avatar service, and I got looped in to bring it to a deployable state, along with the UI integration. Locally, the service needed a <strong>puppeteer</strong> instance to convert <strong>SVG to PNG</strong>, which wasn’t going to work in a serverless world without extra deployment effort. So I looked for alternatives and first came across <strong>sharp</strong>, which worked in practice but didn’t allow the use of custom fonts, which defeated the purpose. Ultimately, the <a target="_blank" href="https://github.com/thx/resvg-js"><strong>resvg-js</strong></a> library came to the rescue and solved the problem in a serverless environment.</p>
<h3 id="heading-feature-flagging-designcode-review-optimization">Feature flagging + design/code review optimization</h3>
<p>I think this is one of the major learnings that has worked well for the stuff you have seen being created at Hashnode for more than a year now.</p>
<p>I personally advocate a lot for feature flagging (FF) and getting work to production fast using this approach. This doesn’t mean we just ship bare-bones work to production behind an FF. It means that after the developers themselves have thoroughly tested the feature in their local dev environment and ensured quality, it goes to production for internal team members.</p>
<p>I have already spoken about its advantages in the blog dashboard section, but to reiterate :-</p>
<ul>
<li><p>This helps in validating design with production data.</p>
</li>
<li><p>This enables the code reviewer to take their time to review the code properly.</p>
</li>
</ul>
<p>The quality of the product you ship to the end user with this approach will be higher because of the confidence you gain from how it looked in production when the internal team tested it. Again, our scale of team and product is different, so experiment with what works best for your team and product.</p>
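<p>The gating described above can be sketched roughly like this (flag names and fields are hypothetical, not our actual implementation):</p>

```javascript
// Rough sketch of flag-gated rollout (all names are hypothetical).
// A feature ships to prod behind a flag; while `internalOnly` is true,
// only team members see it, so design/code review can happen against
// production data without affecting real users. Flipping `internalOnly`
// to false rolls the feature out to everyone.
const FLAGS = {
  newBlogDashboard: { enabled: true, internalOnly: true },
  newOnboarding: { enabled: true, internalOnly: false },
};

function isFeatureEnabled(flagName, user, flags = FLAGS) {
  const flag = flags[flagName];
  if (!flag || !flag.enabled) return false;
  if (flag.internalOnly) return Boolean(user && user.isTeamMember);
  return true;
}
```

<p>Shipping this way means the actual “release” is just a flag flip, decoupled from the deploy itself.</p>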
<h2 id="heading-hacking-a-side-project-that-converted-articles-to-songs">Hacking a side project that converted articles to songs</h2>
<p>I was able to hack together a side project called <strong>Hasunode</strong> over one of the weekends this year, which converts a Hashnode article into a song 🎶. It was well received by users. In fact, it won <a target="_blank" href="https://peerlist.io/lakbychance/project/hasunode">first place</a> in one of the <strong>Peerlist</strong> project weeks ✨.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://x.com/Lakbychance/status/1784797635104391283">https://x.com/Lakbychance/status/1784797635104391283</a></div>
<p> </p>
<h2 id="heading-hashnode-workation-in-bangalore">Hashnode workation in Bangalore</h2>
<p>Almost our whole team assembled in <strong>Bangalore, India</strong> to work together under the same roof for a week in October. We also created a one-week roadmap of items to target only during that week. This included stuff outside our ongoing feature work: any quality-of-life improvement, a small feature, etc., that could be wrapped up in a week. This is when I actually shipped the <strong>Mermaid</strong> support on Blogs.</p>
<p>We also had some interesting conversations regarding our future roadmap and how things are looking for us right now.</p>
<p>We celebrated Jannik’s birthday together and had great food and drinks at different places. I also got to play <strong>Badminton</strong> with the German folks and had intense matches with them. I also got to meet and have interesting talks with our ex-shippers at Hashnode - <a class="user-mention" href="https://hashnode.com/@evilhead">Girish Patil</a> and Rajat.</p>
<p>Also, since I was going to complete my 3 years the week after the workation, the team surprised me with an early celebration featuring my favourite <strong>Lotus Biscoff cheesecake</strong> 😋</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731778796069/fcee3c87-5a97-41e3-90b9-19b78e99d6ae.jpeg" alt class="image--center mx-auto" /></p>
<p>We worked out of this Coworks space in Bellandur for a week 👇</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731778920987/f4833e44-e18b-41dd-a149-149463e2d93c.jpeg" alt class="image--center mx-auto" /></p>
<p>Running into <strong>Ashish Vidyarthi</strong> at our hotel was not something I had on my list, but it was great to interact with him for those few seconds.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731779403902/81bd2cde-712c-4ba3-afe3-5f72d10ed9f1.jpeg" alt class="image--center mx-auto" /></p>
<p>Some of us went to Cubbon Park early in the morning on our last day. I jogged <strong>2 km</strong> with Jannik, ending up out of breath with a <strong>160 bpm</strong> heart rate while he was at <strong>130 bpm</strong>, questioning all of my life choices. But it’s super fun to be competitive with him.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731779042668/270dc1a7-61cd-44b9-999f-074a8f93acd1.jpeg" alt class="image--center mx-auto" /></p>
<p>Also on the last day, I enjoyed spending time with the team doing <strong>Go-karting, Laser tag and Bowling.</strong> Overall it was a great experience, from the stay to all the goodies we got 😉, and obviously getting much more familiar, in person, with the thoughts and culture of the team members.</p>
<h2 id="heading-whats-cooking-now">What’s cooking now ?</h2>
<p>Well, as some of you might know, post our <a target="_blank" href="https://www.producthunt.com/products/hashnode-developers-community?comment=3920770#docs-by-hashnode">launch</a> of Docs by Hashnode, we have been actively listening to customer feedback and improving the offering. A lot has been shipped since then, and one of the major features catering to our <strong>Startup and Enterprise</strong> plan users is going to be released very soon: <strong>versioning support</strong> for docs.</p>
<p>This again required a good amount of discussion and planning to ensure everything works well for existing users while we build it in an iterative fashion.</p>
<p>In fact, we have been shipping it incrementally over feature flags and it’s almost there. I got to collaborate with <a class="user-mention" href="https://hashnode.com/@kdev65">Kieran Roberts</a> and Jannik for building this.</p>
<p>Excited to bring it live for all users !</p>
<p>Here is a sneak peek of the editor and published view :-</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731780255453/91b30905-340e-4b4d-842e-d08737c2c92e.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731780179775/551771f4-2606-4a33-ae57-ebe8fff67ea1.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-what-else">What else ?</h2>
<p>Well, this article is already longer than what I had in mind. If I go on rambling about interesting bug fixes and more things I got to ship, it’s gonna take more words and probably bore the readers. I have also realized that I can share this kind of stuff in sets of posts on X or LinkedIn. Here is a list of miscellaneous interesting bits I shared over social media that touch on aspects of my work at Hashnode :-</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://x.com/Lakbychance/status/1846818222164602995">https://x.com/Lakbychance/status/1846818222164602995</a></div>
<p> </p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://x.com/Lakbychance/status/1845401589453897811">https://x.com/Lakbychance/status/1845401589453897811</a></div>
<p> </p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://x.com/Lakbychance/status/1844600912448237636">https://x.com/Lakbychance/status/1844600912448237636</a></div>
<p> </p>
<h2 id="heading-wrapping-up">Wrapping up</h2>
<p>Well, it’s been another interesting year at Hashnode. Besides the technical bits, I got to interact with almost all team members to work on one or another aspect of the products. This was probably the year of <strong>collaboration</strong> for me. Also, leading one of the longest-running projects of my career so far at a startup is something great to reflect on personally. We are collectively figuring things out at Hashnode on a daily basis, but I am proud of all that we have done so far as a team 🚀.</p>
]]></content:encoded></item><item><title><![CDATA[Twofold Journey: Leveling Up At Hashnode]]></title><description><![CDATA[Introduction
Well here we are reflecting on my time at Hashnode the previous year. I have officially completed two years at Hashnode and as a part of ongoing ritual where I document the experience, here is another one.And also, in first quarter of 20...]]></description><link>https://blog.lakbychance.com/twofold-journey-leveling-up-at-hashnode</link><guid isPermaLink="true">https://blog.lakbychance.com/twofold-journey-leveling-up-at-hashnode</guid><dc:creator><![CDATA[Lakshya Thakur]]></dc:creator><pubDate>Mon, 08 Jan 2024 05:00:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1704632039734/7daf9b83-cca3-4953-b99c-325de960ed8b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>Well, here we are, reflecting on my time at Hashnode over the previous year. I have <strong>officially completed</strong> two years at Hashnode, and as part of the ongoing ritual where I document the experience, here is another one.<br />Also, in the first quarter of <strong>2023</strong>, I got promoted to <strong>Senior Software Engineer</strong>, so it was a great year from a career perspective.</p>
<p>Here are the previous reads for those interested :-</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://blog.lakbychance.com/one-planet-later-at-hashnode">https://blog.lakbychance.com/one-planet-later-at-hashnode</a></div>
<p> </p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://blog.lakbychance.com/500-commits-later-at-hashnode">https://blog.lakbychance.com/500-commits-later-at-hashnode</a></div>
<p> </p>
<p>Let’s dive straight into what I was involved in over the last year.</p>
<h2 id="heading-blog-ux-improvements">Blog UX improvements</h2>
<p>This began at the end of 2022, around the first week of <strong>December</strong>. If you have been using Hashnode as your blog provider for a while, you might have noticed that the design a year ago vs. how it looks right now is quite different. Our designers worked on an overhaul for the same.<br />I got to collaborate with <a class="user-mention" href="https://hashnode.com/@kieran6roberts">Kieran Roberts</a> , <a class="user-mention" href="https://hashnode.com/@evilhead">Girish Patil</a> and <a class="user-mention" href="https://hashnode.com/@vamsirao">Vamsi Rao</a> on these. Honestly, we do things in a much better way now than we did for this one (in terms of the DX). It was a set of huge PRs that were reviewed and merged in the early weeks of 2023.<br />I specifically got to work on the <strong>comments sidebar</strong> and bits of the <strong>floating toolbar</strong> for each article. In the <strong>floating toolbar</strong>, the logic powering how a click on the heart hits our backend API in an optimistic way was something I enjoyed building.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704622248499/4b966b9a-6034-4ee5-8870-5e92b025a2d6.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704622432034/dcc0a10c-9aa3-417b-8f8c-088dc40eeacc.gif" alt class="image--center mx-auto" /></p>
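<p>The optimistic pattern mentioned above boils down to updating the UI immediately and rolling back if the server call fails. Here is a minimal sketch of those state transitions in TypeScript; the state shape and function names are illustrative, not Hashnode's actual code:</p>

```typescript
// A minimal sketch of optimistic like/unlike state transitions.
// Shapes and names are illustrative, not Hashnode's actual code.
interface LikeState {
  liked: boolean;
  count: number;
}

// Apply the user's click immediately, before the API responds.
function applyOptimisticLike(state: LikeState): LikeState {
  return state.liked
    ? { liked: false, count: state.count - 1 }
    : { liked: true, count: state.count + 1 };
}

// If the API call later fails, restore the snapshot taken before the click.
function rollback(_current: LikeState, snapshot: LikeState): LikeState {
  return snapshot;
}

// Example flow: snapshot -> optimistic update -> (on failure) rollback.
const before: LikeState = { liked: false, count: 10 };
const optimistic = applyOptimisticLike(before);
const restored = rollback(optimistic, before);
```

<p>In the real flow, the optimistic state is rendered right away while the API request runs in the background; only a failed request puts the snapshot back.</p>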
<p>There were not only UI-facing changes but also significant database modeling and re-syncing changes. We almost got everything right. But the Friday deployment curse did catch up with us.</p>
<p><img src="https://javascriptgorilla.files.wordpress.com/2016/07/65093758.jpg" alt class="image--center mx-auto" /></p>
<p>Since a lot of changes were done in a short period of time, I spent the following weekend evaluating whether everything was functioning smoothly. That’s when I discovered that we had missed syncing one <strong>important</strong> field to our db model, which resulted in a reset of a particular model field for all users. Luckily, we patched it the next day by running a script that merged the new field values with the old ones and restored the data to a consistent state.</p>
<p><img src="https://media3.giphy.com/media/l0Iy69RBwtdmvwkIo/giphy.gif" alt class="image--center mx-auto" /></p>
<h2 id="heading-blog-performance-improvements">Blog performance improvements</h2>
<p>After the UX overhaul, I got to spend some time assessing areas where we could improve the performance of our blogs. This included lazy loading resources that do not need to be shown above the fold (primarily <strong>images</strong> and <strong>embeds</strong>), dynamically loading components that are only needed on interaction, and some bits about optimizing LCP elements. We actually did a Twitter space where, around the <strong>18:50</strong> mark, we start sharing the <strong>performance</strong> updates.</p>
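<p>For the images and embeds part, the browser-native building block is the <code>loading="lazy"</code> attribute, paired with an eagerly loaded LCP candidate above the fold. A sketch of the idea, with illustrative markup rather than Hashnode's actual templates:</p>

```html
<!-- Above-the-fold cover image: load eagerly so it can become the LCP element -->
<img src="/cover.png" alt="Article cover" fetchpriority="high" />

<!-- Below-the-fold images and embed iframes: defer until near the viewport -->
<img src="/diagram.png" alt="Diagram" loading="lazy" />
<iframe src="https://example.com/embed" loading="lazy" title="Embed"></iframe>
```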
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://twitter.com/hashnode/status/1620754081206734849">https://twitter.com/hashnode/status/1620754081206734849</a></div>
<p> </p>
<h2 id="heading-undisclosed-poc">Undisclosed POC</h2>
<p>This was a new project from the ground up. I collaborated with <a class="user-mention" href="https://hashnode.com/@sai_krrishhh">Sai Krishna Prasad Kandula</a> on this for almost 2.5 months. This included whiteboarding db models, flows and UI, then bringing them to life and keeping them accessible to our internal team for feedback. But what’s the future for it? It’s something we are now proactively evaluating, and it might even be on the upcoming roadmap 🤞.</p>
<h2 id="heading-hashnode-pro">Hashnode Pro</h2>
<p>I got to work on the <strong>Focus mode</strong> in our editor, which is part of our <strong>Hashnode Pro</strong> offering. I collaborated with <a class="user-mention" href="https://hashnode.com/@joshuanavi">Joshua 'Navi</a> on this one, possibly our first collab 👊. I became a fan of <strong>zustand</strong> while working on this, since we introduced it for this feature.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704623492891/7651aa4e-2d23-4270-be32-5e9a9fd0d737.png" alt class="image--center mx-auto" /></p>
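<p>Since <strong>zustand</strong> gets a mention: its appeal is that a store is just a closure over some state plus a subscribe mechanism. Below is that core pattern sketched in plain TypeScript, without the library, using an illustrative focus-mode flag (not Hashnode's actual store). With zustand itself this collapses into a single <code>create()</code> call exposing the same state and actions.</p>

```typescript
// The core idea behind a zustand-style store, sketched without the library.
// State shape and names are illustrative, not Hashnode's actual code.
type Listener = () => void;

function createFocusModeStore() {
  let focusMode = false;
  const listeners = new Set<Listener>();

  return {
    getFocusMode: () => focusMode,
    toggleFocusMode: () => {
      focusMode = !focusMode;
      listeners.forEach((l) => l()); // notify subscribers (e.g. React components)
    },
    subscribe: (listener: Listener) => {
      listeners.add(listener);
      return () => listeners.delete(listener); // unsubscribe handle
    },
  };
}

const store = createFocusModeStore();
let notifications = 0;
const unsubscribe = store.subscribe(() => notifications++);
store.toggleFocusMode();
unsubscribe();
```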
<p>I also contributed to building the landing pages for <code>hashnode.com/ai</code> and <code>hashnode.com/pro</code>. The section that took the most time for me was the AI section of the <code>hashnode.com/pro</code> page. Enter <strong>framer motion</strong> 🤌.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704623641342/c4afeccd-1a0b-4d37-b768-ee7bcfbe76db.gif" alt class="image--center mx-auto" /></p>
<h2 id="heading-loginlogout-patches-for-hashnode-blogs">Login/logout patches for hashnode blogs</h2>
<p>I swear I needed <strong>cookies</strong> at this time to solve <strong>cookie</strong>-related issues. I again got to collaborate with <a class="user-mention" href="https://hashnode.com/@evilhead">Girish Patil</a> on this one. This was solved in 2 phases, and since then we haven't had any specific complaints about a user's login/logout on their blog not working as expected. The initial fix was applied strictly for me and Girish, and later on for everyone.</p>
<p>NextJS middleware is honestly a powerful thing. I think after enough discussion and <strong>what ifs</strong>, we decided that middleware was the best place for these fixes to live. We used the <code>req.cookies</code> API at the time to solve this. But their <strong>cookies</strong> API back then worked very differently from the <code>req.headers.set</code> approach you might be used to. Later on, I think we ditched that API and just used the <strong>headers</strong> API, since it worked as expected and had fewer unknowns.</p>
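<p>For anyone curious what "just using the headers API" looks like in practice: instead of a cookies helper, you build the <code>Set-Cookie</code> header value yourself and append it to the response headers. A hypothetical sketch (the cookie name, domain and options below are illustrative, not our actual values):</p>

```typescript
// Build a Set-Cookie header value by hand, the way you would when
// appending it via the middleware response's headers API.
// Cookie name and domain below are hypothetical examples.
interface CookieOptions {
  domain?: string;
  path?: string;
  maxAge?: number; // seconds; 0 expires the cookie immediately
  httpOnly?: boolean;
  secure?: boolean;
}

function buildSetCookie(name: string, value: string, opts: CookieOptions = {}): string {
  const parts = [`${name}=${encodeURIComponent(value)}`];
  if (opts.domain) parts.push(`Domain=${opts.domain}`);
  parts.push(`Path=${opts.path ?? "/"}`);
  if (opts.maxAge !== undefined) parts.push(`Max-Age=${opts.maxAge}`);
  if (opts.httpOnly) parts.push("HttpOnly");
  if (opts.secure) parts.push("Secure");
  return parts.join("; ");
}

// Expiring a session cookie across a blog's domain on logout:
const header = buildSetCookie("session", "", {
  domain: ".example.com",
  maxAge: 0,
  httpOnly: true,
  secure: true,
});
```

<p>In a middleware you would then append it with something like <code>response.headers.append("Set-Cookie", header)</code>, which sidesteps the framework-specific cookie abstractions entirely.</p>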
<h2 id="heading-subscriber-modal">Subscriber Modal</h2>
<p>As part of our <strong>newsletter</strong> feature, we allow users of Hashnode to enable a <strong>subscriber modal</strong> to be shown at the end of each of their articles. This was primarily built by <a class="user-mention" href="https://hashnode.com/@kieran6roberts">Kieran Roberts</a>. There were two parts to this feature: showing the modal, and taking action when the user interacts with it. It's the second part I got to contribute to along with Kieran. When the user lands back on the article after creating their account on Hashnode, a set of checks needed to be run to let the user know what exactly happened and what they can now expect. This was communicated via different <strong>toasts</strong> based on internal states.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704624350837/73884002-c559-49ab-a756-d3545792ae4d.png" alt class="image--center mx-auto" /></p>
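<p>The "different toasts based on internal states" part is essentially a mapping from the outcome of those checks to a message. A hypothetical sketch follows; the actual states and copy at Hashnode may well differ:</p>

```typescript
// Hypothetical outcomes after the reader returns from sign-up; the real
// internal states and toast copy at Hashnode may differ.
type SubscribeOutcome =
  | "subscribed"
  | "already-subscribed"
  | "pending-verification"
  | "error";

interface Toast {
  kind: "success" | "info" | "error";
  message: string;
}

// Map each outcome to the toast the returning reader should see.
function toastFor(outcome: SubscribeOutcome): Toast {
  switch (outcome) {
    case "subscribed":
      return { kind: "success", message: "You're subscribed to this newsletter." };
    case "already-subscribed":
      return { kind: "info", message: "You were already subscribed." };
    case "pending-verification":
      return { kind: "info", message: "Check your inbox to confirm your subscription." };
    case "error":
      return { kind: "error", message: "Something went wrong. Please try again." };
  }
}

const toast = toastFor("already-subscribed");
```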
<h2 id="heading-meeting-the-folks-at-bangalore">Meeting the folks at Bangalore</h2>
<p>Got to meet the folks in <strong>Bangalore</strong> in the middle of the year and work alongside them ✨. This is always a refreshing thing to do. We also did a nearby morning trek together <a target="_blank" href="https://emojipedia.org/mountain">⛰️</a>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704625035970/78c02d8e-8f16-46d3-8fa7-cb4d81ebe8cf.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-kicking-off-component-library">Kicking off component library</h2>
<p>This was primarily initiated and implemented by <a class="user-mention" href="https://hashnode.com/@rajatkapoor">Rajat Kapoor</a> and the designers - <a class="user-mention" href="https://hashnode.com/@UXpaul">Pritam Paul</a> and <a class="user-mention" href="https://hashnode.com/@joshuanavi">Joshua 'Navi</a> . I got to contribute to things like handling how <strong>local development</strong> can work with this library without <strong>publishing</strong> it as a package. We have been following the local + publishing pre-patches flow for a while now to enable rapid development. We still have to put in the effort to make it consumable throughout Hashnode, but I am glad efforts have started in this direction.</p>
<h2 id="heading-rhpc-reduce-home-page-clutter">RHPC - Reduce Home Page Clutter</h2>
<p>Yes, quite a code word there. This was the first project done after kicking off the component library, so most of the components got made during this project.</p>
<p>This is where we restructured our feed layouts from <strong>3 to 2 column</strong> and also redesigned the <strong>feed UX</strong> + <strong>right sidebar</strong> + <strong>notifications</strong> and <strong>profile menu</strong>.</p>
<p>However, the project’s execution didn’t go as smoothly as we expected. There were some <strong>user-facing quality</strong> issues at the end (which we quickly patched), but nevertheless I specifically had two learnings/realizations post this :-</p>
<ul>
<li><p>Things were not <strong>streamlined</strong> enough for everyone to know where we could make the tradeoff between <strong>quality</strong> and <strong>speed</strong>.</p>
</li>
<li><p>Having more engineers involved in the same project doesn't necessarily mean more output. Sometimes less is more, and identifying this early on is better.</p>
</li>
</ul>
<p>We also had an <strong>Engineering Alignment</strong> call and published <strong>Product Coding Guidelines</strong> as part of our PR template, to go through before raising a PR for review. It started as notes I made for myself while working on different features and bugs at Hashnode, and I had been wanting to make it accessible to our engineering members.</p>
<h2 id="heading-discussions">Discussions</h2>
<p>Taking all the learnings from the previous project, we got into developing <strong>Discussions</strong>. We wanted our users to explore what others think about the articles on our feed and indulge in healthy discussions around them. So, via this feature, we made it easier for users to post comments straight from <strong>hashnode.com</strong>.</p>
<p>This is my favorite piece of work from 2023. We were proactive in pushing things behind feature flags to prod from day one. I got to work on refactoring our feed cards to handle the new <strong>discussion</strong>-related CTAs and links. I also got to work on the new <strong>commenting</strong> system's UI implementation + integrating our new GQL APIs with it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704625428417/98f99334-5655-4a57-8433-ccaf813e7a73.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704625401677/f810230d-42ac-4126-999a-1b47a79b48c2.png" alt class="image--center mx-auto" /></p>
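<p>The "pushing things behind feature flags" approach mentioned above usually reduces to a small guard at render or route level. A generic sketch, with hypothetical flag names and a plain object standing in for whatever the real flag source is (remote config, the user record, etc.):</p>

```typescript
// Generic feature-flag gate; the flag store here is a plain object,
// but in practice it could come from remote config or the user record.
// Flag names are hypothetical.
type Flags = Record<string, boolean>;

function isEnabled(flags: Flags, flag: string): boolean {
  return flags[flag] === true; // unknown flags default to off
}

const flags: Flags = { discussions: true, docsVersioning: false };

// Only render the new discussion-related CTAs when the flag is on:
const showDiscussions = isEnabled(flags, "discussions");
const showVersioning = isEnabled(flags, "docsVersioning");
```

<p>The useful property is the default-off behavior: unfinished work can sit in production code paths without ever rendering for regular users.</p>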
<h2 id="heading-headless-hashnode">Headless Hashnode</h2>
<p><a target="_blank" href="https://hashnode.com/headless"><strong>Headless Hashnode</strong></a> is our newest offering from Hashnode and a really <strong>powerful</strong> one. It allows you to fully own the frontend of your blog and serve it on your own subpaths (e.g. <code>/blog</code>).</p>
<p>I think most of my tinkering here went into improving the <strong>Lighthouse</strong> scores for our <a target="_blank" href="https://github.com/Hashnode/starter-kit"><strong>starter-kit</strong></a> around <strong>performance, accessibility and SEO</strong>. This was purely pouring in the experience earned while working on the blogs offering and making sure our starter kit benefits from the same.</p>
<h2 id="heading-social-media-marketting">Social media marketing</h2>
<p>Okay, that's a made-up category for a <strong>Software Engineer</strong>, but hey, I did create two videos to promote Hashnode features. The first one was for <strong>focus mode</strong>.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://twitter.com/Lakbychance/status/1656901555818299392">https://twitter.com/Lakbychance/status/1656901555818299392</a></div>
<p> </p>
<p>The second one was for <strong>headless hashnode</strong>, and the anime lovers (JJK fans) are going to love it.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://twitter.com/Lakbychance/status/1714485323839815848">https://twitter.com/Lakbychance/status/1714485323839815848</a></div>
<p> </p>
<p>It feels nice to have this sense of <strong>freedom</strong> to be creative and promote the work done by your company in your own way.</p>
<h2 id="heading-hashnode-fitness-challenge">Hashnode Fitness challenge</h2>
<p>Well, as of writing this, I am not as fit as I was when I secured <strong>3rd</strong> place in the <strong>September</strong> fitness challenge at Hashnode. I will restart the routine soon. For me, this meant running <strong>5 km</strong> every morning, and I probably hit my <strong>peak</strong> fitness of 2023. Oh, how the winters make me super lazy (smartly blaming the winters).</p>
<p><img src="https://media4.giphy.com/media/zDi4BCYVupi6xX9FIA/giphy.gif" alt class="image--center mx-auto" /></p>
<h2 id="heading-quick-wins">Quick wins</h2>
<p>We also introduced a notion of <strong>quick wins</strong> and sharing them with others. As part of this, I have a few to highlight :-</p>
<ul>
<li><p>Solve users' problems directly via social media / GitHub - At Hashnode, we often reach out to users on Twitter or other platforms when they encounter issues and help them resolve their problems.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704628956775/4ba74722-4f13-4e31-ba0e-9ea568cd35d7.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Added a perf-related improvement that cut the loading time of our <code>/users</code> API by <code>300ms</code>. An example of small change -&gt; big impact.</p>
</li>
<li><p>Our <strong>Vercel</strong> builds were constantly failing due to the massive size of our assets, but solving that would have taken more engineering effort. The builds were running for around <strong>20 mins</strong> only to fail later, which resulted in <strong>at least 30 mins</strong> of daily time being wasted just to deploy our changes. So I went through the official troubleshooting docs and some GitHub issues and found that this was prominent if the <strong>BUILD_CACHE</strong> was being used (the default setting). So we simply turned it off, and now our builds don't fail and take an average of <strong>10-15 mins</strong>, a tradeoff we were happy with.</p>
</li>
<li><p>Introduced a <strong>Dark mode</strong> toggle straight from the editor. As much as it made it easy for a user to switch themes, it also made it easy for the devs to quickly test theme-related changes in the editor. A win-win for both parties.</p>
</li>
<li><p>We had removed the <strong>Write</strong> button from our dashboard header some time back, as it was not optimized to work across different blogs, but users gave us feedback that it should be there. So I got to work on re-introducing it + adding the capability to retain the context of the blog for which the button was clicked, so that a user doesn't land on the editor and have to reselect the blog. An article is automatically created for them based on which blog dashboard they clicked the button from.</p>
</li>
</ul>
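<p>For anyone hitting similar build-cache failures: Vercel documents an environment variable to skip the build cache, which is one way the toggle described above can be applied (check the current troubleshooting docs, as this may change):</p>

```shell
# Set in Project Settings -> Environment Variables on Vercel.
# When set to 1, every build starts without restoring cached artifacts.
VERCEL_FORCE_NO_BUILD_CACHE=1
```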
<h2 id="heading-hashnode-internal-hackathon">Hashnode Internal Hackathon</h2>
<p>We also had our first internal hackathon at Hashnode, where all our team members went on to <strong>develop</strong> and <strong>ship</strong> really cool things. My favorites are the <strong>Webhooks</strong> and <strong>Cmd+K</strong> features, which were also the winners of this hackathon.<br />I personally got to work on <strong>three</strong> things, out of which <strong>two</strong> made it to production.</p>
<ul>
<li>One of them was the <strong>EditorV2</strong> overhaul in terms of design and accessibility.</li>
</ul>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://twitter.com/hashnode/status/1732364075009450328">https://twitter.com/hashnode/status/1732364075009450328</a></div>
<p> </p>
<ul>
<li>Another one was <strong>font optimisation</strong> for blogs, which I covered in this tweet :-</li>
</ul>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://twitter.com/Lakbychance/status/1730546425098850329">https://twitter.com/Lakbychance/status/1730546425098850329</a></div>
<p> </p>
<h2 id="heading-weird-bugs">Weird bugs</h2>
<p>I have already shared some stuff in the following article, which focuses on debugging issues at Hashnode. These were mostly dealt with in <strong>2022.</strong></p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://blog.lakbychance.com/the-detective-side-of-being-a-software-developer">https://blog.lakbychance.com/the-detective-side-of-being-a-software-developer</a></div>
<p> </p>
<p>In <strong>2023,</strong> I have not yet compiled a list of the bug fixes I got to work on, except one for which I wrote a Twitter thread. This one was UI-specific and needed us to have a zoomed-out perspective. I mean that literally: I could only replicate it at a browser zoom of <code>90%</code>, and that helped us narrow down the cause.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://twitter.com/Lakbychance/status/1711400113589965236">https://twitter.com/Lakbychance/status/1711400113589965236</a></div>
<p> </p>
<p>There have been some recent ones which I will probably keep for future articles.</p>
<h2 id="heading-code-reviews">Code reviews</h2>
<p>I posted a tweet comparing how my GitHub activity has looked over my tenure here :-</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://twitter.com/Lakbychance/status/1679888619820187648">https://twitter.com/Lakbychance/status/1679888619820187648</a></div>
<p> </p>
<p>One can see that the amount of time I spend on reviews has increased. It's now balanced with what I work on. This year, it spanned features, bug fixes, migrations, refactors, etc. Some highlights :-</p>
<ul>
<li><p><strong>Table of contents</strong> (done by <a class="user-mention" href="https://hashnode.com/@unclebigbay">Ayodele Samuel Adebayo</a> )</p>
</li>
<li><p><strong>Migrating from twin.macro to a simple Tailwind setup for blogs</strong> (done by <a class="user-mention" href="https://hashnode.com/@JannikWempe">Jannik Wempe</a> ) - My biggest code review yet. I took my time with this one.</p>
</li>
<li><p><strong>Discussions, Blog UX improvements</strong> and <strong>Security Patches</strong></p>
</li>
<li><p><strong>Accessibility improvements</strong> (done by <a class="user-mention" href="https://hashnode.com/@kieran6roberts">Kieran Roberts</a> )</p>
</li>
</ul>
<p>And more miscellaneous stuff I can't remember right now. This is not just something I have to do as a reviewer; it's a shared responsibility of all our engineering team members. Over time, I feel we have become more open as a team to reviewing code and building context around problems. This is necessary to keep unblocking people and shipping stuff out there.</p>
<p>I also wrote another article on how having a broader perspective during code reviews can sometimes help :-</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://blog.lakbychance.com/code-reviews-a-zoomed-out-approach">https://blog.lakbychance.com/code-reviews-a-zoomed-out-approach</a></div>
<p> </p>
<h2 id="heading-whats-cooking">What's cooking?</h2>
<p>Well, I have been working on another <strong>migration</strong> now, with an approach similar to how we built <strong>Discussions</strong>. Besides the UI work, it also involves creating new <strong>GQL APIs</strong>, which I have been wanting to do for some time. It's going well as of now, and I hope our users will enjoy the <strong>revamped experience</strong> ✨.</p>
<p>Besides this, I am thankful for the wonderful year I had at <strong>Hashnode</strong> and for being able to collaborate with amazing folks here 💯.</p>
]]></content:encoded></item><item><title><![CDATA[Performance Analysis : NextJS Script Component for Google Analytics]]></title><description><![CDATA[Introduction
One of the things when using the NextJS ecosystem that hasn't yet clicked for me is their Script component.
https://nextjs.org/docs/pages/building-your-application/optimizing/scripts
 
I have been running some experiments on how I can le...]]></description><link>https://blog.lakbychance.com/performance-analysis-nextjs-script-component-for-google-analytics</link><guid isPermaLink="true">https://blog.lakbychance.com/performance-analysis-nextjs-script-component-for-google-analytics</guid><dc:creator><![CDATA[Lakshya Thakur]]></dc:creator><pubDate>Wed, 01 Nov 2023 10:49:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1698835448644/fed27b7f-58b6-401d-9a65-f8322ae5f616.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>One of the things in the <code>NextJS</code> ecosystem that hasn't yet clicked for me is their <code>Script</code> component.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://nextjs.org/docs/pages/building-your-application/optimizing/scripts">https://nextjs.org/docs/pages/building-your-application/optimizing/scripts</a></div>
<p> </p>
<p>I have been running some experiments on how I can leverage it for the <strong>Google Analytics</strong> script, <code>gtag.js</code>.</p>
<p><img src="https://media.tenor.com/K9yYRKhI6OIAAAAC/steins-gate.gif" alt="Anime Mad Scientist GIFs | Tenor" class="image--center mx-auto" /></p>
<h2 id="heading-experiment-the-setup">Experiment - The Setup</h2>
<p>There are two types of deployment involved here:-</p>
<ul>
<li><p><strong>Gtag-Standard</strong> <strong>-</strong> This one uses the <code>_document.tsx</code> file where we are loading the standard <code>script</code> tags with the <code>async</code> attribute as recommended by <a target="_blank" href="https://developers.google.com/tag-platform/gtagjs/install#add_the_google_tag_to_your_website"><strong>Google</strong></a><strong>.</strong></p>
<pre><code class="lang-typescript">  <span class="hljs-keyword">import</span> { Html, Head, Main, NextScript } <span class="hljs-keyword">from</span> <span class="hljs-string">"next/document"</span>;

  <span class="hljs-keyword">const</span> Document = <span class="hljs-function">() =&gt;</span> {
    <span class="hljs-keyword">return</span> (
      &lt;Html&gt;
        &lt;Head&gt;
          {<span class="hljs-comment">/* Global site tag (gtag.js) - Google Analytic */</span>}
          &lt;script
            <span class="hljs-keyword">async</span>
            src=<span class="hljs-string">"https://www.googletagmanager.com/gtag/js?id=G-2VSQZMZCY6"</span>
          &gt;&lt;/script&gt;
          &lt;script
            dangerouslySetInnerHTML={{
              __html: <span class="hljs-string">`window.dataLayer = window.dataLayer || []; function gtag()
                     {dataLayer.push(arguments)}
                     gtag('js', new Date()); gtag('config', 'G-2VSQZMZCY6');`</span>,
            }}
          &gt;&lt;/script&gt;
        &lt;/Head&gt;
        &lt;body&gt;
          &lt;Main /&gt;
          &lt;NextScript /&gt;
        &lt;/body&gt;
      &lt;/Html&gt;
    );
  };
  <span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> Document;
</code></pre>
</li>
<li><p><strong>Gtag-NextJS</strong> <strong>-</strong> This one uses the <code>index.tsx</code> file where we are loading the <code>Script</code> tags with the <code>afterInteractive</code> strategy.</p>
<pre><code class="lang-typescript">  <span class="hljs-keyword">import</span> Script <span class="hljs-keyword">from</span> <span class="hljs-string">"next/script"</span>;

  <span class="hljs-keyword">const</span> Home = <span class="hljs-function">() =&gt;</span> {
    <span class="hljs-keyword">return</span> (
      &lt;main className=<span class="hljs-string">"h-screen flex items-center justify-center"</span>&gt;
        &lt;h1&gt;Google Analytics Test&lt;/h1&gt;
        &lt;&gt;
          &lt;Script
            id=<span class="hljs-string">"google-analytics-init"</span>
            dangerouslySetInnerHTML={{
              __html: <span class="hljs-string">`
            window.dataLayer = window.dataLayer || []; function gtag()
            {dataLayer.push(arguments)}
            gtag('js', new Date()); gtag('config', 'G-2VSQZMZCY6');`</span>,
            }}
          /&gt;
          &lt;Script
            id=<span class="hljs-string">"google-analytics"</span>
            src={<span class="hljs-string">`https://www.googletagmanager.com/gtag/js?id=G-2VSQZMZCY6`</span>}
          /&gt;
        &lt;/&gt;
      &lt;/main&gt;
    );
  };

  <span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> Home;
</code></pre>
<p>  <strong>Note -</strong> Both of the deployments use the <code>pages</code> router setup.</p>
<p>  I referred to the following two resources on how to load the <code>gtag</code> script.</p>
<ul>
<li><p><a target="_blank" href="https://nextjs.org/docs/messages/next-script-for-ga#possible-ways-to-fix-it"><strong>NextJS Docs</strong></a></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698832524537/fca4fec4-150b-4cb4-b7ae-e58247def1ac.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>next/third-parties -</strong> <a target="_blank" href="https://github.com/vercel/next.js/blob/1caa58087a428666bb14d40dbc159b9a768b075d/packages/third-parties/src/google/gtm.tsx">Google Tag Manager</a></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698832605823/b6a4d252-55a1-4a40-87c3-f7dbcc7e6371.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
</ul>
<h2 id="heading-experiment-the-execution">Experiment - The Execution</h2>
<p>I ran both deployments through <strong>WebPageTest's</strong> <code>Visual Comparison</code> test.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698831504846/e7b95bd9-ae10-4718-ac80-672a7c1a738b.png" alt class="image--center mx-auto" /></p>
<p>This was done <strong>10 times</strong> so that we get a good enough sample size.</p>
<h3 id="heading-first-run">First Run</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698831908344/960b5536-7e54-44a5-b93c-b293504c72fc.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-second-run">Second Run</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698831943457/bafe46b2-e489-48df-a340-44c8c4c933e1.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-third-run">Third Run</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698831955171/a3e6dd91-2cf4-43cf-8257-4031985e12d1.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-fourth-run">Fourth Run</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698831964737/ff384241-4d85-43ad-b4fb-788e3de4df18.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-fifth-run">Fifth Run</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698831985295/1ad2157d-60aa-42c8-a9b2-e38b43259af2.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-sixth-run">Sixth Run</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698831994392/725faad8-0430-4c81-9fda-cf17b22fd42c.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-seventh-run">Seventh Run</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698832003188/dafdceea-a885-4cc8-be8a-c197bebf8c15.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-eight-run">Eighth Run</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698832011099/d812b276-5679-4ab4-ba2b-6ac503df33d2.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-ninth-run">Ninth Run</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698832029203/d7aa874c-c933-404d-b9fd-bb0403865724.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-tenth-run">Tenth Run</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698832050091/fbe2fd61-1480-448f-b3a9-c936ecf4a9e8.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-observations">Observations</h2>
<p>From all the above runs, there were two interesting bits that I want to <mark>highlight</mark> specifically:-</p>
<ul>
<li><p><strong>CPU Busy Time -</strong> This has been consistently lower for the <code>Gtag-Standard</code> deployment.</p>
</li>
<li><p><strong>Total Blocking Time (TBT) -</strong> This has been <strong>frequently</strong> lower for the <code>Gtag-Standard</code> deployment, but maybe with more runs this metric averages out to be the same for both deployments.</p>
</li>
</ul>
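<p>Whether <strong>TBT</strong> really averages out can be sanity-checked numerically. Here is a small TypeScript sketch that compares the mean of a metric across runs for both deployments; the numbers below are made up for illustration, not the actual WebPageTest results:-</p>

```typescript
// Hypothetical per-run TBT values (ms) for each deployment.
// Illustrative numbers only, not the actual WebPageTest data.
const tbt = {
  gtagNextjs: [310, 280, 295, 330, 300, 315, 290, 305, 320, 285],
  gtagStandard: [270, 290, 260, 300, 275, 285, 265, 280, 295, 270],
};

// Average a list of metric values.
const mean = (values: number[]): number =>
  values.reduce((sum, v) => sum + v, 0) / values.length;

// A positive difference means Gtag-Standard averaged a lower (better) TBT.
const diff = mean(tbt.gtagNextjs) - mean(tbt.gtagStandard);
console.log(`Mean TBT difference: ${diff.toFixed(1)}ms`);
```

<p>With enough runs, a difference close to zero would support the "averages out" hypothesis.</p>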
<h2 id="heading-resources">Resources</h2>
<h3 id="heading-deployment-urls">Deployment Urls</h3>
<ul>
<li><p><strong>Gtag-NextJS -</strong> <a target="_blank" href="https://gtag-nextjs-script.vercel.app">https://gtag-nextjs-script.vercel.app</a></p>
</li>
<li><p><strong>Gtag-Standard -</strong> <a target="_blank" href="https://gtag-standard-script.vercel.app">https://gtag-standard-script.vercel.app</a></p>
</li>
</ul>
<h3 id="heading-webpagetest-runs">Webpagetest runs</h3>
<ul>
<li><p><a target="_blank" href="https://www.webpagetest.org/video/compare.php?tests=231101_BiDcS6_7AR,231101_BiDc6Q_7AV">https://www.webpagetest.org/video/compare.php?tests=231101_BiDcS6_7AR,231101_BiDc6Q_7AV</a></p>
</li>
<li><p><a target="_blank" href="https://www.webpagetest.org/video/compare.php?tests=231101_AiDc17_7QY,231101_BiDc4K_7SX">https://www.webpagetest.org/video/compare.php?tests=231101_AiDc17_7QY,231101_BiDc4K_7SX</a></p>
</li>
<li><p><a target="_blank" href="https://www.webpagetest.org/video/compare.php?tests=231101_AiDcSM_7YZ,231101_BiDcMY_813">https://www.webpagetest.org/video/compare.php?tests=231101_AiDcSM_7YZ,231101_BiDcMY_813</a></p>
</li>
<li><p><a target="_blank" href="https://www.webpagetest.org/video/compare.php?tests=231101_BiDcP6_81F,231101_BiDcJ0_81G">https://www.webpagetest.org/video/compare.php?tests=231101_BiDcP6_81F,231101_BiDcJ0_81G</a></p>
</li>
<li><p><a target="_blank" href="https://www.webpagetest.org/video/compare.php?tests=231101_BiDcQ9_821,231101_BiDcNP_822">https://www.webpagetest.org/video/compare.php?tests=231101_BiDcQ9_821,231101_BiDcNP_822</a></p>
</li>
<li><p><a target="_blank" href="https://www.webpagetest.org/video/compare.php?tests=231101_AiDcTG_80M,231101_AiDcVS_80P">https://www.webpagetest.org/video/compare.php?tests=231101_AiDcTG_80M,231101_AiDcVS_80P</a></p>
</li>
<li><p><a target="_blank" href="https://www.webpagetest.org/video/compare.php?tests=231101_BiDcXA_82N,231101_BiDcCV_82P">https://www.webpagetest.org/video/compare.php?tests=231101_BiDcXA_82N,231101_BiDcCV_82P</a></p>
</li>
<li><p><a target="_blank" href="https://www.webpagetest.org/video/compare.php?tests=231101_BiDcH0_830,231101_AiDcPV_818">https://www.webpagetest.org/video/compare.php?tests=231101_BiDcH0_830,231101_AiDcPV_818</a></p>
</li>
<li><p><a target="_blank" href="https://www.webpagetest.org/video/compare.php?tests=231101_AiDc2E_81E,231101_AiDcMT_81F">https://www.webpagetest.org/video/compare.php?tests=231101_AiDc2E_81E,231101_AiDcMT_81F</a></p>
</li>
<li><p><a target="_blank" href="https://www.webpagetest.org/video/compare.php?tests=231101_AiDcHT_81S,231101_BiDc8G_83C">https://www.webpagetest.org/video/compare.php?tests=231101_AiDcHT_81S,231101_BiDc8G_83C</a></p>
</li>
</ul>
<h2 id="heading-conclusion">Conclusion</h2>
<p>The whole point of this experiment was to evaluate whether using <strong>NextJS Script</strong> tags would get us better <strong>performance</strong> scores when loading <strong>Google Analytics</strong> JavaScript. That didn't happen; more or less, they are on par with the standard ones. The standard script tag delays the <strong>load</strong> event, while the <strong>NextJS Script</strong> tag starts after the <strong>load</strong> event. In either case, I believe <a target="_blank" href="https://web.dev/articles/script-evaluation-and-long-tasks#what_is_script_evaluation"><strong>Script Evaluation</strong></a> (of <code>gtag.js</code>) contributes to the <strong>TBT</strong> metric. I thought that maybe <strong>loading the gtag script</strong> after the browser <strong>load event</strong> might result in lower <strong>TBT</strong>, but that's not how it works.</p>
<p>Again, I am curious about the best way to load such <strong>analytics</strong> scripts. They are essential for users but contribute to <strong>poor</strong> scores for <strong>Mobile</strong> performance audits. Feel free to let me know if there are <strong>incorrect assumptions or gaps</strong> in how the above experiment was done.  </p>
<p>Thank you for your time :)</p>
]]></content:encoded></item><item><title><![CDATA[Code Reviews: A Zoomed Out Approach]]></title><description><![CDATA[Introduction
This article will touch upon scenarios that I have encountered while working at Hashnode where some code reviews helped us prevent future rework on the same issue by having a broader perspective.

Code Review 0
The Problem
Sometime back,...]]></description><link>https://blog.lakbychance.com/code-reviews-a-zoomed-out-approach</link><guid isPermaLink="true">https://blog.lakbychance.com/code-reviews-a-zoomed-out-approach</guid><dc:creator><![CDATA[Lakshya Thakur]]></dc:creator><pubDate>Thu, 05 Oct 2023 13:57:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1696513713515/732db821-b6d4-47f1-9c18-37b054df1088.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>This article will touch upon scenarios that I have encountered while working at Hashnode where some code reviews helped us prevent future rework on the same issue by having a broader perspective.</p>
<p><img src="https://y.yarn.co/327f1125-ab7a-4d34-aa52-de359c862b46_text.gif" alt="YARN | Uh-huh. Big words. | Succession (2018) - S01E07 Austerlitz | Video  clips by quotes | 327f1125 | 紗" class="image--center mx-auto" /></p>
<h2 id="heading-code-review-0">Code Review 0</h2>
<h3 id="heading-the-problem">The Problem</h3>
<p>Sometime back, one of our engineers, <a class="user-mention" href="https://hashnode.com/@kieran6roberts">Kieran Roberts</a> , stumbled upon a performance degradation when he was working on upgrading the navigation header behaviour for the blogs.</p>
<p><img src="https://i.pinimg.com/originals/a9/35/6f/a9356fbfe7c590080c655f72edd5fc41.gif" alt="Hmmm Thinking GIF - Hmmm Thinking Confused - Discover &amp; Share GIFs | Gif,  Cartoon memes, Cool gifs" class="image--center mx-auto" /></p>
<p>The issue wasn't strictly related to the header but the left sidebar. Whenever we closed the left sidebar on the blogs, the CPU spiked up to <code>100%</code>.</p>
<p>This behaviour happened only in <strong>Chrome</strong>. Now that spike also affected our new header UX because it relied on doing some fancy <code>listening to scroll events</code> and <code>style updates</code>. The header navigation became janky. So this was a blocker that we wanted to get rid of first before shipping the header update.</p>
<p><img src="https://media.tenor.com/geWqX74SFmIAAAAd/computer-cpu.gif" alt="Computer Cpu GIF - Computer Cpu Hot - Discover &amp; Share GIFs" class="image--center mx-auto" /></p>
<p>As far as I remember, he found a workaround that prevented the CPU spike, though it required a tradeoff that I can't exactly recall. When the <strong>PR</strong> for it came to me for code review, I started digging into the <code>GitHub</code> issues around this behaviour in <a target="_blank" href="https://www.radix-ui.com/primitives">radix</a> (a set of powerful UI primitives that we have been using lately to power a lot of UI on Hashnode) and ran into this <code>issue</code> thread:-</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/radix-ui/primitives/issues/1605">https://github.com/radix-ui/primitives/issues/1605</a></div>
<p> </p>
<p>That is where I found out that the maintainer had, just a day earlier, patched the behaviour that was making their <code>Dialog</code> radix primitive cause this CPU spike in Chrome, and that the patch was available in a <code>release candidate</code>.</p>
<h3 id="heading-the-result">The Result</h3>
<p>We updated our package version to the <code>release candidate</code> after confirming with the maintainer on that same thread, and it did stop the CPU spikes.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696511046033/b41f94a8-101a-421e-bef2-e64299608dc2.png" alt class="image--center mx-auto" /></p>
<p>We were honestly lucky to observe this <code>issue</code> at the right moment. The patch from the OSS side shipped at just the right time for us, since we had run into the issue only 2 days before it. The learning for me here is that it helps to drop by the OSS repositories of the libraries/frameworks used in your work to check whether the bug/regression exists in the library's implementation rather than where it's used in your project.</p>
<h2 id="heading-code-review-1">Code Review 1</h2>
<h3 id="heading-the-problem-1">The Problem</h3>
<p>This is much more recent, I think within the last month in fact, where a minor vulnerability was reported by a user.</p>
<p>I was again tasked with doing a code review for the PR which addressed a part of that vulnerability. Mostly, the PR addressed it at the <code>UI</code> end by modifying some bits. We were still unsure which check was missing at the backend <code>API</code> that might have caused it, and wanted to first ensure that, at least from the UI end, it wasn't doable anymore. After that, we would have investigated further.</p>
<p>As I was navigating through the PR, I thought to take a quick look at the API implementation, which existed in a separate repo. I found out that we already had the required checks in place and there was no reason this vulnerability should exist. At that moment, I looked up when these checks landed in the code and it was again</p>
<p><img src="https://media.tenor.com/v1OXycx71xcAAAAC/yesterday-the-other-day.gif" alt="Yesterday GIFs | Tenor" class="image--center mx-auto" /></p>
<p>Talk about <strong>coincidences</strong>.</p>
<p>The fix happened at the API level as part of a refactoring of one of our <code>GraphQL</code> mutations by <a class="user-mention" href="https://hashnode.com/@Foxfl">Florian Fuchs</a> . He was not aware of this issue and we were not aware that this was fixed just a day ago. But again, <strong>lucky sighting.</strong> To be double sure, we looped him in and got confirmation that indeed this was fixed by the refactor.</p>
<h3 id="heading-the-result-1">The Result</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696511244070/bc8f82c7-5112-4c5a-af03-03323415130c.png" alt class="image--center mx-auto" /></p>
<p>The confirmation closed the loop there itself. The timing was critical here: when Kieran could replicate the vulnerability, he made a UI-level patch for it, but by the time I code-reviewed it, the fix had already landed at the API level, so there was <strong>this gap</strong> which needed to be bridged. What worked well here was that during the code review, just out of curiosity, I checked the API repo and didn't restrict myself to the scope of the original review.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>It's weird that both these code reviews involved me and Kieran and had that one-day-fix factor. I think such cases might be rare, but since they have happened to me twice already, I thought of sharing them to maybe open up another perspective for other developers who are in the middle of making or reviewing a PR and can use it.</p>
<p><img src="https://media.tenor.com/18XSk_5ue-AAAAAM/eye-universe.gif" alt="Eyeball Explosion GIFs | Tenor" class="image--center mx-auto" /></p>
<p>Thank you for your time. Do you have such experiences with code reviews? Feel free to drop a comment and share them. You might just save another developer a good amount of time.</p>
]]></content:encoded></item><item><title><![CDATA[Crafting a Twitter like sticky sidebar]]></title><description><![CDATA[Introduction
You're scrolling through Twitter (I still like calling it that) or Medium and you get a nice view of the sidebar as it snaps to the bottom or top of the page if you try to scroll beyond its boundaries.

We will re-create the same experie...]]></description><link>https://blog.lakbychance.com/crafting-a-twitter-like-sticky-sidebar</link><guid isPermaLink="true">https://blog.lakbychance.com/crafting-a-twitter-like-sticky-sidebar</guid><category><![CDATA[Web Development]]></category><category><![CDATA[Frontend Development]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[React]]></category><category><![CDATA[General Programming]]></category><dc:creator><![CDATA[Lakshya Thakur]]></dc:creator><pubDate>Mon, 25 Sep 2023 03:45:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1695584114746/cb40ec13-9f8d-447f-ab03-f15a1164d12b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>You're scrolling through Twitter (I still like calling it that) or Medium and you get a nice view of the sidebar as it snaps to the bottom or top of the page if you try to scroll beyond its boundaries.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695577946532/5febb774-9b3c-4f6e-a68d-076dc311c5fb.gif" alt class="image--center mx-auto" /></p>
<p>We will re-create the same experience through a <code>react</code> implementation.</p>
<p>But before that, let's observe how the properties of these sidebar elements mutate as you scroll the viewport in existing implementations.</p>
<h2 id="heading-observations">Observations</h2>
<ul>
<li><p><strong>Twitter</strong> - On Twitter, as you scroll from top to bottom of the screen and the bottom of the sidebar comes into view, the sidebar's <code>position</code> becomes <code>sticky</code> with a mutated <code>top</code> value. There is also a <code>margin-top</code> on the sidebar element. Now if you scroll from bottom to top the sidebar remains <code>sticky</code> but now with a mutated <code>bottom</code> value. Again, the <code>margin-top</code> value changes in this case.</p>
</li>
<li><p><strong>Medium</strong> - On Medium, as you scroll from top to bottom of the screen and the bottom of the sidebar comes into view, the sidebar's <code>position</code> becomes <code>sticky</code> with a mutated <code>top</code> value. Now if you scroll from bottom to top, the sidebar's <code>position</code> is <code>relative</code> along with a mutated <code>top</code> value.</p>
</li>
</ul>
<p>I found that <code>Medium</code> was able to get the bottom stickiness working without explicitly setting a <code>bottom</code> value and without any <code>margin-top</code> manipulation (although <code>margin-top</code> does exist on the element, removing it didn't do anything). So only two properties, namely <code>top</code> and <code>position</code>, needed to be mutated if we went with this implementation.</p>
<h2 id="heading-considerations">Considerations</h2>
<p>With that in mind, I first thought of an <code>Intersection Observer</code> implementation where I would use two <code>pseudo</code> divs wrapping the actual content and observe their intersection with the viewport to make certain decisions on how to mutate the <strong>sidebar</strong> properties.</p>
<p>Here is what the HTML structure for this implementation looked like:-</p>
<pre><code class="lang-xml"><span class="hljs-tag">&lt;<span class="hljs-name">div</span>&gt;</span> - Sidebar element
    <span class="hljs-tag">&lt;<span class="hljs-name">div</span>/&gt;</span>  - Top pseudo div
        {children} - sidebar content
    <span class="hljs-tag">&lt;<span class="hljs-name">div</span>/&gt;</span> Bottom pseudo div
<span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span>
</code></pre>
<p>While making this work, I realized that it might be hard to avoid stuff like <code>getBoundingClientRect</code> or <code>offsetHeight</code> properties to compute certain cases which <code>Intersection Observer (IO)</code> might not account for. I wanted to avoid those for performance reasons, as reading them may result in <a target="_blank" href="https://gist.github.com/paulirish/5d52fb081b3570c81e3a"><code>reflows</code></a>, whereas reading values in <code>IO</code> doesn't cause that.</p>
<p>So I shifted from the <code>IO</code> approach to pure DOM calculations to keep the approach uniform. It might not be the most ideal approach, but it works well.</p>
<p>Here is what the HTML structure for this implementation looked like:-</p>
<pre><code class="lang-xml"><span class="hljs-tag">&lt;<span class="hljs-name">div</span>&gt;</span> - Sidebar element
  {children}
<span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span>
</code></pre>
<p>Again, the calculations need to happen inside a <code>scroll</code> listener because the logic depends on whether you're <strong>scrolling up</strong> or <strong>scrolling down</strong>.</p>
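<p>Determining the scroll direction is straightforward: compare the current scroll offset against the one recorded on the previous scroll event. A minimal sketch (the variable names here are mine, not necessarily the exact ones used in the repo):-</p>

```typescript
// Derive scroll direction by comparing the current scroll offset with the
// offset recorded on the previous scroll event.
function getScrollDirection(
  scrollYOffset: number,
  prevScrollTop: number,
): "up" | "down" | "none" {
  if (scrollYOffset < prevScrollTop) return "up";
  if (scrollYOffset > prevScrollTop) return "down";
  return "none"; // e.g. a scroll event fired without actual movement
}
```

<p>Inside the handler, the previous offset is then updated to the current one before returning, so the next event has a fresh baseline to compare against.</p>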
<h2 id="heading-implementation">Implementation</h2>
<p>The full-fledged implementation is available below but I will be sharing and explaining those bits which cover the conditions that govern when the sidebar is sticky and when it's not.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/lakbychance/playground/tree/main/packages/ui/Sticky">https://github.com/lakbychance/playground/tree/main/packages/ui/Sticky</a></div>
<p> </p>
<p>Let's observe the set of important conditions (inside a scroll handler) that give the sidebar that <strong>sticky</strong> behaviour:-</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">if</span> (isScrollingUp) {
  <span class="hljs-keyword">if</span> (isTopEndBetweenViewport) {
    setPosition(<span class="hljs-string">'sticky'</span>);
    setTop(topPositionAtWhichElementBecomesStickyFromTop);
  } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (isBottomEndBetweenViewport) {
    setPosition(<span class="hljs-string">'relative'</span>);
    setTop(topDistanceOfElementRelativeToPageTop - offset);
  }
} <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (isScrollingDown) {
  <span class="hljs-keyword">if</span> (isTopEndBetweenViewport) {
    setPosition(<span class="hljs-string">'relative'</span>);
    setTop(topDistanceOfElementRelativeToPageTop - offset);
  } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (isBottomEndBetweenViewport) {
    setPosition(<span class="hljs-string">'sticky'</span>);
    setTop(topDistanceAtWhichElementBecomesStickyFromBottom);
  }
}
</code></pre>
<p><strong>Explanation:-</strong></p>
<ul>
<li><p>When a user is <strong>scrolling up</strong>:-</p>
<ul>
<li><p>If the top of the sidebar is visible, then we set its <code>position</code> to <code>sticky</code>. We also set its <code>top</code> value to the position we want it to start sticking from.</p>
</li>
<li><p>If the bottom of the sidebar is visible, then we set its <code>position</code> to <code>relative</code>. We also set its <code>top</code> value to the element's own offset from the top of the page. So suppose the element is at <code>1234px</code> from the top of the page; then <code>top</code> will become <code>1234px</code>.</p>
</li>
</ul>
</li>
<li><p>When a user is <strong>scrolling down</strong>:-</p>
<ul>
<li><p>If the top of the sidebar is visible, then we set its <code>position</code> to <code>relative</code>. We also set its <code>top</code> value to the element's own offset from the top of the page. So suppose the element is at <code>436px</code> from the top of the page; then <code>top</code> will become <code>436px</code>.</p>
</li>
<li><p>If the bottom of the sidebar is visible, then we set its <code>position</code> to <code>sticky</code>. We also set its <code>top</code> value to the position from which we want it to start sticking to the bottom. This will mostly be a <strong>negative</strong> value.</p>
</li>
</ul>
</li>
</ul>
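<p>Predicates like <code>isTopEndBetweenViewport</code> can be computed from the element's bounding-rect values and the viewport height. The helpers below are my guess at the shape of those checks, shared for illustration only (the actual hook's internals may differ):-</p>

```typescript
// Illustrative viewport predicates derived from an element's bounding-rect
// top/bottom (which are relative to the viewport) and the viewport height.
interface RectLike {
  top: number;
  bottom: number;
}

// The element's top edge is currently somewhere within the viewport.
const isTopEndBetweenViewport = (rect: RectLike, viewportHeight: number): boolean =>
  rect.top >= 0 && rect.top <= viewportHeight;

// The element's bottom edge is currently somewhere within the viewport.
const isBottomEndBetweenViewport = (rect: RectLike, viewportHeight: number): boolean =>
  rect.bottom >= 0 && rect.bottom <= viewportHeight;

// The whole element fits inside the viewport.
const areBothTopAndBottomEndsBetweenViewport = (
  rect: RectLike,
  viewportHeight: number,
): boolean =>
  isTopEndBetweenViewport(rect, viewportHeight) &&
  isBottomEndBetweenViewport(rect, viewportHeight);
```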
<p>Now let's see more conditions within the scroll handler that do not depend on the <strong>scrolling direction</strong> but instead are required to exit early even before reaching the direction-based logic we discussed above:-</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">if</span> (isTopEndBelowViewport || isBottomEndAboveViewport) {
  setPosition(<span class="hljs-string">'relative'</span>);
  setTop(scrollYOffset);
  prevScrollTop = scrollYOffset;
  <span class="hljs-keyword">return</span>;
}
<span class="hljs-keyword">if</span> (areBothTopAndBottomEndsOnOppositeEndsOfViewport) {
  setPosition(<span class="hljs-string">'relative'</span>);
  setTop(topDistanceOfElementRelativeToPageTop - offset);
  prevScrollTop = scrollYOffset;
  <span class="hljs-keyword">return</span>;
}

<span class="hljs-keyword">if</span> (areBothTopAndBottomEndsBetweenViewport) {
  setPosition(<span class="hljs-string">'sticky'</span>);
  setTop(topPositionAtWhichElementBecomesStickyFromTop);
  prevScrollTop = scrollYOffset;
  <span class="hljs-keyword">return</span>;
}
</code></pre>
<p><strong>Explanation:-</strong></p>
<ul>
<li><p>If the top end of the sidebar is below the viewport or the bottom end of the sidebar is above the viewport, we set its <code>position</code> to <code>relative</code> and its <code>top</code> to <code>scrollYOffset</code> (alias for <code>window.scrollY</code>). But when can we run into such a situation? It's possible that the sidebar unmounts and remounts while navigating between pages, or that some race condition left it at an incorrect position. This is more of an extra check that helps mitigate those cases.</p>
</li>
<li><p>If the top end of the sidebar is above the viewport and the bottom end of the sidebar is below the viewport, we set its <code>top</code> value to the element's own offset from the top of the page.</p>
</li>
<li><p>If the top end and the bottom end of the sidebar are both visible in the viewport, we set its <code>position</code> to <code>sticky</code>. We also set its <code>top</code> value to the position we want it to start sticking from. If the sidebar is too long, this becomes more apparent in the zoomed-out view.</p>
</li>
</ul>
<p><strong>Let's explore the variables used in the above code snippets:-</strong></p>
<ul>
<li><p><code>topPositionAtWhichElementBecomesStickyFromTop</code> - This is the user-provided <code>top</code> value. By default, this is <code>0</code>. You might want to change it to something else as per your needs. The sidebar calculations will factor in this property.</p>
</li>
<li><p><code>topDistanceOfElementRelativeToPageTop</code> - This is calculated using the <code>element.offsetTop</code> property. <code>offsetTop</code> is calculated relative to its nearest <code>positioned</code> parent, also called the <code>offsetParent</code>. If no such <code>positioned</code> parent is present, <code>body</code> is used for calculation.</p>
</li>
<li><p><code>offset</code> - This is also the <code>element.offsetTop</code> property, but the one calculated initially, outside the scroll handler. It accounts for stuff like <code>padding</code>, <code>margin</code>, <code>position</code> etc. and needs to be factored into the scroll handler calculations.</p>
</li>
<li><p><code>topDistanceAtWhichElementBecomesStickyFromBottom</code> - This is the <code>element.getBoundingClientRect().top</code> value. <code>getBoundingClientRect()</code> gives the <strong>element's</strong> position relative to the viewport. So when the bottom of the sidebar is visible, <code>getBoundingClientRect().top</code> will generally be <code>negative</code>, as the top would be above the current viewport in most cases. So we can set that as the <code>top</code> value along with <code>position</code> as <code>sticky</code> to snap the sidebar to the bottom of the viewport.</p>
</li>
</ul>
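<p>One caveat worth spelling out about <code>element.offsetTop</code>: since it is relative to the <code>offsetParent</code>, computing a distance relative to the whole page can require summing <code>offsetTop</code> values up the <code>offsetParent</code> chain. A sketch of that walk, using plain objects in place of DOM nodes so the logic stands alone:-</p>

```typescript
// offsetTop is relative to the nearest positioned ancestor (offsetParent),
// so the page-relative distance is the sum of offsetTop values walking up
// the offsetParent chain. Plain objects stand in for DOM elements here.
interface OffsetNode {
  offsetTop: number;
  offsetParent: OffsetNode | null;
}

function distanceFromPageTop(el: OffsetNode): number {
  let distance = 0;
  let node: OffsetNode | null = el;
  while (node) {
    distance += node.offsetTop;
    node = node.offsetParent;
  }
  return distance;
}
```

<p>If no positioned ancestor exists, the chain ends at the <code>body</code>, matching the behaviour described above.</p>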
<p>All of the above code logic is abstracted in a <code>useStickyElement</code> hook. This hook is then consumed by a <code>Sticky</code> component, which looks like this:-</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> Sticky = React.forwardRef&lt;HTMLDivElement, ComponentProps&gt;(<span class="hljs-function">(<span class="hljs-params">props, ref</span>) =&gt;</span> {
  <span class="hljs-keyword">const</span> { children, top, <span class="hljs-keyword">as</span> = <span class="hljs-string">'div'</span>, ...restOfTheProps } = props;
  <span class="hljs-keyword">const</span> Element = <span class="hljs-keyword">as</span>;
  <span class="hljs-keyword">const</span> stickyContainerRef = useRef&lt;HTMLDivElement&gt;(<span class="hljs-literal">null</span>);
  <span class="hljs-keyword">const</span> mergedRefs = composeRefs(ref, stickyContainerRef);
  <span class="hljs-keyword">const</span> { top: _top, position } = useStickyElement({
    elRef: stickyContainerRef,
    top,
  });
  <span class="hljs-keyword">return</span> (
    &lt;Element
      ref={mergedRefs}
      style={{
        top: _top,
        position,
        height: <span class="hljs-string">'fit-content'</span>,
      }}
      {...restOfTheProps}
    &gt;
      {children}
    &lt;/Element&gt;
  );
});
</code></pre>
<p>In the above code, the <code>height: 'fit-content'</code> CSS property is important. It keeps the element from occupying more space than its content needs, which is what lets the hook calculations work. If it's not present, the sidebar's <code>height</code> will fall back to <code>auto</code>, which will make it occupy the rest of the unused space.</p>
<h2 id="heading-limitations">Limitations</h2>
<p>The current implementation works great for cases where the elements above the sidebar won't resize or disappear. But if they do, the implementation cannot account for those because it relies on the initial <code>element.offsetTop</code>. There is a way to make that work by simply <code>unmounting</code> and <code>mounting</code> the <code>Sticky</code> component to account for DOM changes and reinitialize the calculations. Using a different <code>key</code> on the <code>Sticky</code> component in such cases ensures this automatically.</p>
<h2 id="heading-playground">Playground</h2>
<p>Check out the <a target="_blank" href="https://github.com/lakbychance/playground/tree/main/packages/ui/Sticky">GitHub</a> repo where you can see the full implementation and also access an <a target="_blank" href="https://playground-7fhyqni90-lapstjup.vercel.app/">example</a> to play with the sidebar by dynamically setting the elements on the main content or the sidebar itself. I have intentionally skipped sharing the verbose bits of the <code>useStickyElement</code> hook in this article to focus on more conditional logic which makes it work.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695581890017/c61f26c7-88ce-440f-bc4e-9760205ab8a0.gif" alt class="image--center mx-auto" /></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>The implementation doesn't try to be generic but is specific to the use case of getting that <strong>Sticky scrollable element</strong> behavior as we see in the sidebars.</p>
<p>For the readers of this article, I have some open-ended questions for you:-</p>
<ul>
<li><p>How would you have implemented it differently? Do you have a less JS and more CSS-based implementation in mind?</p>
</li>
<li><p>What optimizations can be done to the current implementation?</p>
</li>
<li><p>What libraries are doing this great? For example, I played with <a target="_blank" href="https://github.com/yahoo/react-stickynode"><code>react-stickynode</code></a> and <a target="_blank" href="https://github.com/codecks-io/react-sticky-box"><code>react-sticky-box</code></a> before writing this implementation. They both worked fine, but I noticed some jumpy behavior with dynamically loading content (infinite scrolling) or zooming in and out. Although, they are made with much more configurability in mind.</p>
</li>
</ul>
<p><img src="https://media.tenor.com/adh_cIhYhsoAAAAC/lets-get-into-a-discussion-about-this-discussion.gif" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[Using GPT-4 Until I Hit The "Inevitable" Wall]]></title><description><![CDATA[In this article, I am going to be elaborating on how I used GPT-4 to write python code to detect Dysarthria speech and output the transcribed words. But basically, I couldn't get it to work due to a lack of domain knowledge.

Introduction
Back in 201...]]></description><link>https://blog.lakbychance.com/using-gpt-4-until-i-hit-the-inevitable-wall</link><guid isPermaLink="true">https://blog.lakbychance.com/using-gpt-4-until-i-hit-the-inevitable-wall</guid><category><![CDATA[openai]]></category><category><![CDATA[GPT 4]]></category><category><![CDATA[General Programming]]></category><category><![CDATA[Python]]></category><category><![CDATA[Machine Learning]]></category><dc:creator><![CDATA[Lakshya Thakur]]></dc:creator><pubDate>Mon, 03 Apr 2023 04:00:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1680469291933/2c6715be-d118-4721-8375-1059874a3902.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="hn-embed-widget" id="my-voice"></div><p> </p>
<blockquote>
<p>In this article, I am going to be elaborating on how I used GPT-4 to write Python code to detect Dysarthria speech and output the transcribed words. But basically, I couldn't get it to work due to a lack of domain knowledge.</p>
</blockquote>
<h2 id="heading-introduction">Introduction</h2>
<p>Back in 2017-2018 during my college years, I worked on a project with my friend about <em>improving the intelligibility of dysarthric speech</em>. Let's define some of this jargon for you:-</p>
<ul>
<li><p>Dysarthria - It is a motor speech disorder that affects the ability of an individual to articulate words properly due to weakness or poor coordination of the muscles used for speech. It can be caused by various medical conditions, such as neurological disorders, brain injury, or degenerative diseases.</p>
</li>
<li><p>Intelligibility - Intelligibility refers to the degree to which speech can be understood by others. In the context of dysarthria, it refers to the degree to which individuals with dysarthria can convey their intended message through speech.</p>
</li>
</ul>
<p>So all in all, we wanted a means to improve the understanding of words spoken by dysarthric people. At that time, Data Science &amp; ML were very interesting to me even though I lacked domain knowledge. We tried solving it using some RNNs and the like but, I'll be honest, we couldn't even get close. The reason was simple: we lacked domain knowledge. We didn't have the mentorship of a domain expert. All we did was google stuff and play with hyperparameters on the data we gathered, cleaned and prepared for training.</p>
<p>So we abandoned the whole ML approach and couldn't proceed further on it.</p>
<p><img src="https://media.tenor.com/7a8kzob5mKoAAAAM/blocked-nope.gif" alt="A man in a suit and tie saying Blocked" class="image--center mx-auto" /></p>
<p>At that time, I was all into Python. I knew Python best, as far as I can remember. This is not true anymore. I have been writing JS professionally now for almost 3.5 years.</p>
<p>But now we have LLMs filling in lines of code for us in languages we are less familiar with.</p>
<p>With the day-by-day advancements going on, I wanted to revisit the college project from a different angle this time. Instead of trying to improve intelligibility, I wanted to see if we could build a POC ASR (Automatic Speech Recognition) tool that transcribes the words spoken by dysarthric people.</p>
<p>I gave this same problem statement to GPT-4, and it spun up some Python code for me. I still lack domain knowledge of the whole ML space, but I wanted to give this a try and see where I would finally hit the wall.</p>
<p><img src="https://y.yarn.co/fd17b3ce-28bc-40ee-b8eb-0293c25a272d_text.gif" alt="a group of people standing next to each other with one of the persons telling to not discard something until you try something" class="image--center mx-auto" /></p>
<p>In this article, I am going to utilize Hashnode's AI tools to summarise and explain the code that GPT-4 wrote. It's only legit if an AI explains the work of another AI. Think of me as an instructor, while the code and explanations are handled by the AI toolset.</p>
<h2 id="heading-the-dataset">The Dataset</h2>
<p>It goes without saying, but data is of the utmost importance when building a corpus to train any model.</p>
<p>I obtained the data from the UASpeech database. It consists of audio recordings of two types of speakers:-</p>
<ul>
<li><p>Dysarthric: The speakers are categorized as F02, F03 (for female speakers), and M02, M03 (for male speakers), with each index representing a unique speaker.</p>
</li>
<li><p>Control: The speakers are categorized as CF02 and CF03 (for female speakers) and CM02 and CM03 (for male speakers).</p>
</li>
</ul>
<p>So, to understand what's going on with dysarthria, scientists record audio of people with the condition talking. But to get accurate results, they need control audio too.</p>
<p>This allows for a comparison between dysarthric speech and normal speech, helping scientists identify the specific aspects of speech that dysarthria affects.</p>
<p>For each dysarthric audio, there is a corresponding control audio. For example, if a dysarthric individual speaks the word <strong>"Command"</strong> in a labelled file named <strong>F02_B1_C12_M2.wav</strong>, then there is a corresponding control audio named <strong>CF02_B1_C12_M2.wav</strong> with the same utterance of <strong>"Command"</strong>.</p>
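<p>This naming convention can be captured in a tiny helper. The sketch below is my own illustration of the convention (the function names are not part of the UASpeech tooling):-</p>
<pre><code class="lang-python">def control_file_for(dysarthric_name):
    # A dysarthric base name like "F02_B1_C12_M2" maps to its control
    # counterpart by prefixing the speaker code with "C".
    return "C" + dysarthric_name

def dysarthric_file_for(control_name):
    # The reverse mapping simply strips the leading "C".
    return control_name[1:] if control_name.startswith("C") else control_name
</code></pre>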
<p>Hey, just a heads up - right now there aren't any transcripts for these audio files, so we can't see exactly what's being said. But we will get around that by generating them with GPT-4's help.</p>
<h3 id="heading-cleaning-up-the-dysarthric-audio">Cleaning up the dysarthric audio</h3>
<p>Our focus is on the challenge of understanding dysarthric audio. The main issues are the prolonged duration of individual words and the presence of empty audio gaps; for now, we are not concerned with dialect or pronunciation. Background noise further amplifies the difficulty of predicting the intended word. To address these concerns, we tasked GPT-4 with providing Python code that performs noise reduction and voice activity detection.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> librosa
<span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np
<span class="hljs-keyword">import</span> webrtcvad
<span class="hljs-keyword">import</span> os
<span class="hljs-keyword">import</span> torch
<span class="hljs-keyword">import</span> torchaudio

<span class="hljs-keyword">import</span> noisereduce <span class="hljs-keyword">as</span> nr

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">noise_reduction</span>(<span class="hljs-params">waveform, sample_rate</span>):</span>
    y = np.array(waveform)
    y_reduced = nr.reduce_noise(y=y, sr=sample_rate)
    <span class="hljs-keyword">return</span> torch.Tensor(y_reduced)

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">voice_activity_detection</span>(<span class="hljs-params">waveform, sample_rate, vad_window=<span class="hljs-number">30</span></span>):</span>
    vad = webrtcvad.Vad(<span class="hljs-number">3</span>)
    frame_duration = vad_window / <span class="hljs-number">1000.0</span>
    frame_length = round(sample_rate * frame_duration)
    num_frames = len(waveform) // frame_length

    <span class="hljs-comment"># Convert waveform to int16 and scale it</span>
    waveform_int16 = (waveform.numpy() * <span class="hljs-number">32767</span>).astype(np.int16)

    active_frames = []
    <span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> range(num_frames):
        frame = waveform_int16[i * frame_length : (i + <span class="hljs-number">1</span>) * frame_length]
        is_speech = vad.is_speech(frame.tobytes(), sample_rate)
        <span class="hljs-keyword">if</span> is_speech:
            active_frames.append(torch.tensor(frame, dtype=torch.float32) / <span class="hljs-number">32767</span>)

    <span class="hljs-keyword">if</span> len(active_frames) &gt; <span class="hljs-number">0</span>:
        <span class="hljs-keyword">return</span> torch.cat(active_frames)
    <span class="hljs-keyword">else</span>:
        print(<span class="hljs-string">"No speech frames detected."</span>)
        <span class="hljs-keyword">return</span> torch.tensor([], dtype=torch.float32)


<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">load_audio_librosa</span>(<span class="hljs-params">file_path, sample_rate=<span class="hljs-number">16000</span></span>):</span>
    waveform, _ = librosa.load(file_path, sr=sample_rate)
    <span class="hljs-keyword">return</span> torch.from_numpy(waveform), sample_rate


output_folder_vad = <span class="hljs-string">"/path/to/audio/files"</span>
os.makedirs(output_folder_vad, exist_ok=<span class="hljs-literal">True</span>)
<span class="hljs-comment"># load from Audio/F02</span>
input_audio_files = librosa.util.find_files(<span class="hljs-string">"/path/to/audio/files"</span>, ext=[<span class="hljs-string">"wav"</span>])

<span class="hljs-keyword">for</span> input_audio_file <span class="hljs-keyword">in</span> input_audio_files:
    <span class="hljs-keyword">try</span>:
        waveform, sample_rate = load_audio_librosa(input_audio_file)
    <span class="hljs-keyword">except</span> Exception:
        print(<span class="hljs-string">"Error loading file: {}"</span>.format(input_audio_file))
        <span class="hljs-keyword">continue</span>

    <span class="hljs-comment"># Apply noise reduction</span>
    waveform_denoised = noise_reduction(waveform, sample_rate)

    <span class="hljs-comment"># Apply voice activity detection</span>
    waveform_vad = voice_activity_detection(waveform_denoised, sample_rate)

    <span class="hljs-comment"># Save VAD processed audio if the tensor is not empty</span>
    <span class="hljs-keyword">if</span> waveform_vad.numel() &gt; <span class="hljs-number">0</span>:
        output_vad = os.path.join(output_folder_vad, os.path.basename(input_audio_file))
        torchaudio.save(output_vad, waveform_vad.float().unsqueeze(<span class="hljs-number">0</span>), sample_rate)
    <span class="hljs-keyword">else</span>:
        print(<span class="hljs-string">f"Skipping <span class="hljs-subst">{input_audio_file}</span> due to empty waveform after VAD."</span>)
</code></pre>
<p><strong>Explanation</strong>: The above code performs noise reduction and voice activity detection (VAD) on audio files. It uses the <code>librosa</code> library to load audio files in WAV format, applies noise reduction using the <code>noisereduce</code> library, and applies VAD using the <code>webrtcvad</code> library to detect speech segments in the audio. The active speech segments are then saved to a new file in the specified output folder if they are not empty. The VAD algorithm used here is based on the WebRTC implementation, and the <code>vad_window</code> parameter specifies the duration (in milliseconds) of each VAD window. The code also uses the <code>torchaudio</code> library to save the processed audio files.</p>
<p>When first generated, the code did not meet expectations, as the custom noise reduction logic produced subpar audio quality compared to the original. After 3-4 rounds of instructing GPT-4, I got the above code, which produced noticeably cleaner audio than what I had loaded. I thought the result was good enough for training.</p>
<h3 id="heading-transcribing-the-control-audio">Transcribing the control audio</h3>
<p>As previously mentioned, each dysarthric audio has a corresponding control audio, which can be utilized to obtain transcriptions. Therefore, I instructed GPT-4 to use the control audio for this purpose:-</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> os
<span class="hljs-keyword">import</span> pandas <span class="hljs-keyword">as</span> pd
<span class="hljs-keyword">import</span> speech_recognition <span class="hljs-keyword">as</span> sr
<span class="hljs-keyword">import</span> time

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">transcribe_wav</span>(<span class="hljs-params">file_path, retries=<span class="hljs-number">3</span>, timeout=<span class="hljs-number">10</span></span>):</span>
    recognizer = sr.Recognizer()
    <span class="hljs-keyword">with</span> sr.AudioFile(file_path) <span class="hljs-keyword">as</span> source:
        audio = recognizer.record(source)

    <span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> range(retries):
        <span class="hljs-keyword">try</span>:
            <span class="hljs-keyword">return</span> recognizer.recognize_google(audio, language=<span class="hljs-string">'en-US'</span>, show_all=<span class="hljs-literal">False</span>)
        <span class="hljs-keyword">except</span> sr.UnknownValueError:
            print(<span class="hljs-string">f"Could not understand audio in <span class="hljs-subst">{file_path}</span>"</span>)
            <span class="hljs-keyword">return</span> <span class="hljs-string">""</span>
        <span class="hljs-keyword">except</span> sr.RequestError <span class="hljs-keyword">as</span> e:
            <span class="hljs-keyword">if</span> i &lt; retries - <span class="hljs-number">1</span>:
                print(<span class="hljs-string">f"Request failed, retrying (<span class="hljs-subst">{i + <span class="hljs-number">1</span>}</span>/<span class="hljs-subst">{retries}</span>)"</span>)
                time.sleep(<span class="hljs-number">2</span> ** i)
            <span class="hljs-keyword">else</span>:
                print(<span class="hljs-string">f"Could not request results from Google Speech Recognition service; <span class="hljs-subst">{e}</span>"</span>)
                <span class="hljs-keyword">return</span> <span class="hljs-string">""</span>

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">save_progress</span>(<span class="hljs-params">data, output_file=<span class="hljs-string">"transcriptions.csv"</span></span>):</span>
    header = <span class="hljs-keyword">not</span> os.path.exists(output_file)
    df = pd.DataFrame(data, columns=[<span class="hljs-string">"file_name"</span>, <span class="hljs-string">"transcription"</span>])
    <span class="hljs-keyword">with</span> open(output_file, <span class="hljs-string">'a'</span>) <span class="hljs-keyword">as</span> f:
        df.to_csv(f, index=<span class="hljs-literal">False</span>, header=header)

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">load_progress</span>(<span class="hljs-params">input_file=<span class="hljs-string">"transcriptions.csv"</span></span>):</span>
    <span class="hljs-keyword">if</span> os.path.exists(input_file):
        df = pd.read_csv(input_file)
        <span class="hljs-keyword">return</span> df[<span class="hljs-string">"file_name"</span>].tolist()
    <span class="hljs-keyword">else</span>:
        <span class="hljs-keyword">return</span> []

control_folder = <span class="hljs-string">"control_train"</span>
output_file = <span class="hljs-string">"transcriptions.csv"</span>

output_data = []
processed_files = load_progress(output_file)

<span class="hljs-keyword">for</span> file <span class="hljs-keyword">in</span> os.listdir(control_folder):
    fileWithoutExtension = os.path.splitext(file)[<span class="hljs-number">0</span>]
    <span class="hljs-keyword">if</span> file.endswith(<span class="hljs-string">".wav"</span>) <span class="hljs-keyword">and</span> fileWithoutExtension <span class="hljs-keyword">not</span> <span class="hljs-keyword">in</span> processed_files:
        file_path = os.path.join(control_folder, file)

        transcription = transcribe_wav(file_path)

        <span class="hljs-keyword">if</span> transcription == <span class="hljs-string">""</span>:
            <span class="hljs-keyword">continue</span>

        <span class="hljs-comment"># remove .wav from file name</span>
        file = os.path.splitext(file)[<span class="hljs-number">0</span>]

        <span class="hljs-comment"># Save only the new row; saving the accumulated list would duplicate</span>
        <span class="hljs-comment"># previously written rows in the CSV on every append</span>
        save_progress([(file, transcription)], output_file)
        print(<span class="hljs-string">f"Saved transcription for <span class="hljs-subst">{file}</span>"</span>)
</code></pre>
<p><strong>Explanation</strong>: The code transcribes the WAV audio files in the control_train folder using the Google Speech Recognition service. It first defines a function called <code>transcribe_wav</code> that takes in the file path of an audio file and uses the <code>SpeechRecognition</code> library to recognize the speech in it via the Google Speech Recognition service. If the service fails, the function retries up to three times before giving up. It then defines two helper functions, <code>save_progress</code> and <code>load_progress</code>, that save and load the progress of the transcription process to and from a CSV file. Finally, it loads the list of already processed files from the CSV file, loops through each unprocessed audio file in the control_train folder, transcribes the speech using <code>transcribe_wav</code>, saves the transcription to the CSV file using <code>save_progress</code>, and prints a message indicating that the transcription has been saved.</p>
<p>The transcribed data is saved in a <code>.csv</code> file in the following format:-</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>file_name</td><td>transcription</td></tr>
</thead>
<tbody>
<tr>
<td>F02_B1_C11_M2</td><td>trust</td></tr>
<tr>
<td>M03_UW13_C14_M6</td><td>life</td></tr>
<tr>
<td>F03_B4_B1_M6</td><td>command</td></tr>
</tbody>
</table>
</div><h3 id="heading-final-datasets-prepared">Final datasets prepared</h3>
<ul>
<li><p>Dysarthria training audio - This is a collection of various speakers and their pre-processed audio, following the cleaning process outlined earlier. There are approximately 15,000 samples in this category.</p>
</li>
<li><p>Dysarthria testing audio - This data differs from the above in that it is not used during training. Instead, it is used solely to evaluate the accuracy of the trained model on unfamiliar data. With close to 5,000 samples, it is a crucial component in evaluating the model's effectiveness.</p>
</li>
<li><p>The <code>transcriptions.csv</code> file - This and the dysarthria <strong>training</strong> audio are in sync, i.e. no transcriptions were generated from the control audio corresponding to the dysarthria <strong>testing</strong> audio.</p>
</li>
</ul>
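<p>That sync between <code>transcriptions.csv</code> and the training audio can be sanity-checked with a small sketch (the helper name and file layout here are my own assumptions):-</p>
<pre><code class="lang-python">import os

def missing_transcriptions(audio_files, transcribed_names):
    # Return training audio files (e.g. "F02_B1_C11_M2.wav") that have
    # no matching file_name row in transcriptions.csv.
    transcribed = set(transcribed_names)
    missing = []
    for f in audio_files:
        base = os.path.splitext(os.path.basename(f))[0]
        if base not in transcribed:
            missing.append(f)
    return missing
</code></pre>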
<h2 id="heading-the-model">The Model</h2>
<p>When I presented the challenge of developing an ASR for dysarthric speech to GPT-4, it recommended utilizing the Wav2Vec2 model and produced Python code for it. As I encountered errors while running the code on certain data, I fed them back to GPT-4, which eventually returned code that handled unusual data lengths, shapes, and modulo operations.</p>
<p>Here is the code that involves training the model:-</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> os
<span class="hljs-keyword">import</span> re
<span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np
<span class="hljs-keyword">import</span> pandas <span class="hljs-keyword">as</span> pd
<span class="hljs-keyword">import</span> torch
<span class="hljs-keyword">import</span> torchaudio
<span class="hljs-keyword">from</span> torch.utils.data <span class="hljs-keyword">import</span> DataLoader, Dataset
<span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> Wav2Vec2ForCTC, Wav2Vec2Processor
<span class="hljs-keyword">from</span> torch.optim.lr_scheduler <span class="hljs-keyword">import</span> ReduceLROnPlateau
<span class="hljs-keyword">from</span> sklearn.model_selection <span class="hljs-keyword">import</span> train_test_split
<span class="hljs-keyword">from</span> torch.nn.utils.rnn <span class="hljs-keyword">import</span> pad_sequence
<span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> get_linear_schedule_with_warmup


<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">normalize_waveform</span>(<span class="hljs-params">waveform</span>):</span>
    <span class="hljs-keyword">return</span> waveform / torch.max(torch.abs(waveform))

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">contains_nan</span>(<span class="hljs-params">tensor</span>):</span>
    <span class="hljs-keyword">return</span> torch.isnan(tensor).any().item()

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">calculate_validation_loss</span>(<span class="hljs-params">validation_loader, model, device</span>):</span>
    model.eval()
    total_val_loss = <span class="hljs-number">0</span>

    <span class="hljs-keyword">with</span> torch.no_grad():
        <span class="hljs-keyword">for</span> batch <span class="hljs-keyword">in</span> validation_loader:
            input_values, labels = batch
            input_values, labels = input_values.to(device), labels.to(device)

            outputs = model(input_values, labels=labels)
            loss = outputs.loss
            total_val_loss += loss.item()

    model.train()

    <span class="hljs-keyword">return</span> total_val_loss / len(validation_loader)

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">custom_collate_fn</span>(<span class="hljs-params">batch</span>):</span>
    input_values, labels = zip(*batch)

    <span class="hljs-comment"># Remove the extra dimension from input values</span>
    input_values = [iv.squeeze(<span class="hljs-number">0</span>) <span class="hljs-keyword">for</span> iv <span class="hljs-keyword">in</span> input_values]
    labels = [l.squeeze(<span class="hljs-number">0</span>) <span class="hljs-keyword">for</span> l <span class="hljs-keyword">in</span> labels]

    <span class="hljs-comment"># Filter out empty tensors</span>
    filtered_data = [(iv, l) <span class="hljs-keyword">for</span> iv, l <span class="hljs-keyword">in</span> zip(input_values, labels) <span class="hljs-keyword">if</span> l.dim() != <span class="hljs-number">0</span>]

    input_values, labels = zip(*filtered_data)

    input_values = pad_sequence(input_values, batch_first=<span class="hljs-literal">True</span>)
    <span class="hljs-comment"># Pad labels with -100 so padded positions are ignored by the CTC loss</span>
    labels = pad_sequence(labels, batch_first=<span class="hljs-literal">True</span>, padding_value=<span class="hljs-number">-100</span>)

    <span class="hljs-keyword">return</span> input_values, labels



<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">get_last_part</span>(<span class="hljs-params">filename</span>):</span>
    base_name = os.path.splitext(os.path.basename(filename))[<span class="hljs-number">0</span>]
    <span class="hljs-keyword">if</span> base_name[<span class="hljs-number">0</span>] == <span class="hljs-string">'C'</span>:
       base_name = base_name[<span class="hljs-number">1</span>:]
    <span class="hljs-keyword">return</span> base_name

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">read_transcription_file</span>(<span class="hljs-params">file_path</span>):</span>
    <span class="hljs-keyword">with</span> open(file_path, <span class="hljs-string">'r'</span>) <span class="hljs-keyword">as</span> f:
        <span class="hljs-keyword">return</span> f.read().strip()


<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">DysarthriaDataset</span>(<span class="hljs-params">Dataset</span>):</span>
    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__init__</span>(<span class="hljs-params">self, audio_paths, transcriptions, processor</span>):</span>
        self.audio_paths = audio_paths
        self.transcriptions = transcriptions
        self.processor = processor

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__len__</span>(<span class="hljs-params">self</span>):</span>
        <span class="hljs-keyword">return</span> len(self.audio_paths)

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__getitem__</span>(<span class="hljs-params">self, idx</span>):</span>
        audio_path = self.audio_paths[idx]
        transcription = self.transcriptions[idx]

        <span class="hljs-comment"># Load audio and preprocess</span>
        waveform, _ = torchaudio.load(audio_path)
        waveform = normalize_waveform(waveform)
        input_values = self.processor(waveform, return_tensors=<span class="hljs-string">'pt'</span>, sampling_rate=<span class="hljs-number">16000</span>).input_values[<span class="hljs-number">0</span>]

        <span class="hljs-comment"># Process transcription</span>
        <span class="hljs-keyword">with</span> self.processor.as_target_processor():
            labels = self.processor(transcription, return_tensors=<span class="hljs-string">'pt'</span>).input_ids[<span class="hljs-number">0</span>]

        <span class="hljs-keyword">return</span> input_values, labels



processor = Wav2Vec2Processor.from_pretrained(<span class="hljs-string">"facebook/wav2vec2-large-960h"</span>)
processor.tokenizer.do_lower_case = <span class="hljs-literal">True</span>


model = Wav2Vec2ForCTC.from_pretrained(<span class="hljs-string">"facebook/wav2vec2-large-960h"</span>)
device = torch.device(<span class="hljs-string">"cuda"</span> <span class="hljs-keyword">if</span> torch.cuda.is_available() <span class="hljs-keyword">else</span> <span class="hljs-string">"cpu"</span>)
model.to(device)


<span class="hljs-comment"># Replace with your actual file paths and transcriptions</span>
<span class="hljs-comment"># load audio paths and transcriptions</span>
audio_dir = <span class="hljs-string">"/kaggle/input/dysarthric-audio/dysarthric_train_pre-processed/dysarthric_train_pre-processed"</span>

<span class="hljs-comment"># Load the CSV file into a DataFrame</span>
csv_file = <span class="hljs-string">"/kaggle/input/dysarthric-audio/transcriptions.csv"</span>
df = pd.read_csv(csv_file, nrows=<span class="hljs-number">1500</span>)

<span class="hljs-comment"># Extract the audio paths and transcriptions from the DataFrame</span>
audio_paths = df[<span class="hljs-string">'file_name'</span>].apply(<span class="hljs-keyword">lambda</span> x: os.path.join(audio_dir, x + <span class="hljs-string">'.wav'</span>)).tolist()
transcriptions = df[<span class="hljs-string">'transcription'</span>].tolist()

<span class="hljs-comment"># Filter out audio-transcription pairs with empty transcriptions or missing audio files</span>
non_empty_pairs = [(audio, text) <span class="hljs-keyword">for</span> audio, text <span class="hljs-keyword">in</span> zip(audio_paths, transcriptions) <span class="hljs-keyword">if</span> text.strip() != <span class="hljs-string">''</span> <span class="hljs-keyword">and</span> os.path.exists(audio)]

<span class="hljs-comment"># Split the filtered pairs into separate lists of audio files and transcriptions</span>
filtered_audio_files, filtered_transcriptions = zip(*non_empty_pairs)

train_audio_files, val_audio_files, train_transcriptions, val_transcriptions = train_test_split(
    filtered_audio_files, filtered_transcriptions, test_size=<span class="hljs-number">0.1</span>, random_state=<span class="hljs-number">42</span>)

<span class="hljs-comment"># Pass the filtered transcriptions to the DysarthriaDataset</span>
train_dataset = DysarthriaDataset(train_audio_files, train_transcriptions, processor)
val_dataset = DysarthriaDataset(val_audio_files, val_transcriptions, processor)
train_loader = DataLoader(train_dataset, batch_size=<span class="hljs-number">4</span>, shuffle=<span class="hljs-literal">True</span>, collate_fn=custom_collate_fn)
val_loader = DataLoader(val_dataset, batch_size=<span class="hljs-number">4</span>, shuffle=<span class="hljs-literal">False</span>, collate_fn=custom_collate_fn)


epochs = <span class="hljs-number">5</span>
warmup_proportion = <span class="hljs-number">0.1</span>
num_training_steps = len(train_loader) * epochs
num_warmup_steps = int(warmup_proportion * num_training_steps)
optimizer = torch.optim.AdamW(model.parameters(), lr=<span class="hljs-number">5e-7</span>)

scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps, num_training_steps)

model.train()

<span class="hljs-keyword">for</span> epoch <span class="hljs-keyword">in</span> range(epochs):
    epoch_loss = <span class="hljs-number">0</span>

    <span class="hljs-keyword">for</span> batch <span class="hljs-keyword">in</span> train_loader:
        input_values, labels = batch
        input_values, labels = input_values.to(device), labels.to(device)

        optimizer.zero_grad()

        outputs = model(input_values, labels=labels)
        loss = outputs.loss
        loss.backward()

        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=<span class="hljs-number">5.0</span>)

        optimizer.step()
        scheduler.step()
        epoch_loss += loss.item()

    val_loss = calculate_validation_loss(val_loader, model, device)
    print(<span class="hljs-string">f"Epoch <span class="hljs-subst">{epoch + <span class="hljs-number">1</span>}</span>/<span class="hljs-subst">{epochs}</span>, Train Loss: <span class="hljs-subst">{epoch_loss / len(train_loader)}</span>, Val Loss: <span class="hljs-subst">{val_loss}</span>"</span>)


model.save_pretrained(<span class="hljs-string">"/kaggle/working/dysarthria_model_1"</span>)
processor.save_pretrained(<span class="hljs-string">"/kaggle/working/dysarthria_model_1"</span>)
</code></pre>
<p><strong>Explanation</strong>: The code is a Python implementation of a Wav2Vec2 model used to build an ASR (Automatic Speech Recognition) tool that transcribes the words spoken by dysarthric people. The code is divided into several functions and classes that handle tasks such as normalizing waveforms, calculating validation loss, collating data, reading transcription files, and preparing datasets.</p>
<p>The <code>processor</code> and <code>model</code> are initialized with pre-trained weights from the Wav2Vec2 model. The <code>DysarthriaDataset</code> class is defined to load and preprocess the audio files and transcriptions.</p>
<p>The <code>train_loader</code> and <code>val_loader</code> are <code>DataLoader</code> objects that load the <code>train_dataset</code> and <code>val_dataset</code> and are used to train and validate the model, respectively. The <code>optimizer</code> and <code>scheduler</code> are defined to optimize the model's parameters during training.</p>
<p>The model is trained for <code>epochs</code> epochs, with <code>num_training_steps</code> training steps in total. The validation loss is calculated after each epoch, and the model's parameters are saved after training.</p>
<p>That's a pretty nice explanation by Hashnode's writing assistant ✨.</p>
<h2 id="heading-the-training">The Training</h2>
<p>It took some time to get the code to its current state, working on it only over the last and current weekends. My focus was solely on instructing GPT-4 to fix errors in the code it generated, rather than brainstorming solutions myself. The shared dataset was not my initial starting point, as the original data lacked diversity and had low-quality transcriptions. To improve diversity, I prompted GPT-4 with additional instructions to split the data into training and testing sets, and used a more robust library to improve the transcriptions.</p>
<p>Initially, my laptop was my training machine. I began with a sample size of 100, later increased to 500. Unfortunately, I encountered strange results during training: the loss became <code>nan</code>, and I felt I had hit a wall prematurely. The next day, however, I discovered that my code was only reading file names, not their content 🤦. After fixing this issue, my losses produced numerical values.</p>
<p><img src="https://media.tenor.com/M3s-1uMPicoAAAAC/phew-scrubs.gif" alt="Phew Scrubs GIF - Phew Scrubs That Was Close - Discover &amp; Share GIFs" class="image--center mx-auto" /></p>
<p>I discussed this project with the friend I had collaborated with on the college project, and he suggested checking out <a target="_blank" href="https://www.kaggle.com/">Kaggle</a> for free, powerful <strong>GPUs</strong> to train the model. I certainly needed that, because at one point I tried training the model with 5k samples on my machine and even after 1.5 hours, the first epoch hadn't been logged 😅.</p>
<p>So I took my whole setup to Kaggle. This happened very early in the process, so a lot of data that wasn't polished enough got uploaded to Kaggle too, before I ended up with the polished datasets described in the final datasets section above.</p>
<p>Kaggle lets you use powerful GPUs after some user data confirmation. I used the <strong>GPU P100</strong> on my friend's recommendation.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680460176326/ef972f13-74c8-4adb-8dda-5736b73d2772.png" alt="a screenshot of the kaggle accelerator selection dropdown showing the GPUs to select from" class="image--center mx-auto" /></p>
<p>I began with a modest 100 samples, gradually increasing to 500, and found that training proceeded quickly. Thanks to Kaggle Playground, I could train on 5,000 samples in just an hour.</p>
<p>To improve my results, I switched to the pre-trained <code>facebook/wav2vec2-large-960h</code> model. I had initially used <code>facebook/wav2vec2-base-960h</code>, but it struggled with larger sample sizes and often produced poor transcriptions or empty strings for new audio samples.</p>
<h2 id="heading-the-results">The Results</h2>
<p>As previously mentioned, my lack of domain expertise became a roadblock in my progress. This is not a new obstacle for me, as I faced similar challenges during my college days. However, I was curious to see how far GPT-4 could take me.</p>
<p><img src="https://media.tenor.com/EfnOW4LjUhwAAAAC/first-time-james-franco.gif" alt="First Time James Franco James GIF" class="image--center mx-auto" /></p>
<p>As I experimented with different numbers of samples, I tested the trained model (which got saved during the training process) on different audio from dysarthric speakers. At one point, I thought <em>"Hey, this might work"</em> when I tested the following word:-</p>
<p><strong>Command</strong></p>
<ul>
<li><p>The Facebook model transcribed its dysarthric audio as "C MANG"</p>
</li>
<li><p>The trained one transcribed it as "COMA"</p>
</li>
</ul>
<p>Initially, I believed this approach was effective. However, after testing the model with numerous input strings, I noticed a trend. It seemed that the model had learned to avoid having any gaps between transcribed letters. This could be due to my training the model solely on individual words rather than continuous speech.</p>
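<p>For context on why the gaps might disappear: Wav2vec2 is a <strong>CTC</strong> model, and a greedy CTC decode collapses repeated tokens and drops blanks, so spaces only appear if the word-delimiter token is actually predicted. A toy sketch of the idea (the tiny vocabulary here is made up for illustration, not the model's real one):-</p>

```python
# Minimal greedy CTC decode. Assumes a wav2vec2-style character vocabulary
# where id 0 is the CTC blank and "|" is the word delimiter -- this toy
# vocab is illustrative only.
def ctc_greedy_decode(token_ids, id_to_token, blank_id=0):
    out, prev = [], None
    for t in token_ids:
        if t != blank_id and t != prev:  # collapse repeats, skip blanks
            out.append(id_to_token[t])
        prev = t
    # "|" marks word boundaries; if the model never predicts it,
    # the transcript comes out with no gaps between words at all
    return "".join(out).replace("|", " ")

vocab = {0: "<pad>", 1: "C", 2: "O", 3: "M", 4: "A", 5: "|"}
print(ctc_greedy_decode([1, 1, 0, 2, 3, 3, 0, 4], vocab))  # COMA
```

<p>Training only on isolated words means the delimiter token barely shows up in the labels, which could explain the gapless transcripts.</p>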
<p>Occasionally, the audio yielded empty text or merely the first few letters, even when the pre-trained model generated a longer (though incorrect) transcript.</p>
<p>I could only get the model to work with up to 2k samples. Anything more than that and the losses became negative/NaN or the transcriptions worsened.</p>
<p>In certain scenarios, the trained model helped me closely approximate the speech of dysarthric patients. For instance, if the patient spoke the word "INTO" with difficulty, the Facebook model produced "EN TOO" while the trained model generated "INTU". However, the output was not always consistent, and I believe I lost this capability after changing my model based on some of the parameter modifications GPT-4 suggested.</p>
<p>There were instances of negative training and validation losses, suggesting overfitting and a lack of real learning in the model. This issue got worse as more samples were trained. To address it, I experimented with factors such as the <code>learning rate</code>, <code>epochs</code>, <code>optimizers</code>, and <code>gradient normalization</code> parameters. While reducing the <code>epochs</code> and <code>learning rate</code> brought the losses to non-negative values leaning toward 0, it also caused the model to transcribe inaccurately.</p>
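<p>For what it's worth, the gradient normalization knob amounts to gradient-norm clipping (what <code>max_grad_norm</code> controls in the Hugging Face Trainer, as far as I understand). The idea in isolation, sketched with plain Python lists rather than real tensors:-</p>

```python
import math

# Gradient-norm clipping: if the overall gradient norm exceeds max_norm,
# rescale every component so the norm equals max_norm. Gradients are
# modeled here as a flat list of floats for illustration.
def clip_grad_norm(grads, max_norm):
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm > max_norm:
        scale = max_norm / total_norm
        grads = [g * scale for g in grads]
    return grads

clipped = clip_grad_norm([3.0, 4.0], 1.0)  # norm 5.0 gets scaled down to 1.0
```

<p>Clipping keeps a single exploding batch from blowing up the update, but it can't fix losses that are already negative/NaN, which is consistent with what I observed.</p>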
<p>Upon asking GPT-4 about my attempts, it recommended certain actions. However, the process grew frustrating and unproductive, as it failed to recall its previous suggestions, turning everything into trial and error. Despite my repeatedly telling it what had already failed, I was met with the same unhelpful recommendations.</p>
<p><img src="https://media4.giphy.com/media/mWMML2LQBsj8k/giphy.gif" alt="a man in a suit and tie sitting at a desk and sighing" class="image--center mx-auto" /></p>
<h2 id="heading-the-conclusion">The Conclusion</h2>
<p>Overall, I was attempting to make something work without the necessary domain knowledge. It became clear that GPT-4 can only be effective if the user has a good understanding of the subject matter they are utilizing it for.</p>
<p>This is where it shines:-</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://twitter.com/Lakbychance/status/1639457225340260352?s=20">https://twitter.com/Lakbychance/status/1639457225340260352?s=20</a></div>
<p> </p>
<p>Unfortunately, I am facing a challenge where substantial research is required, and I lack knowledge not only about ASRs but also about the fundamental technology behind the Wav2vec2 model and the appropriate datasets required to solve my problem. Although I have a basic understanding of what I want, it is insufficient when dealing with a complex problem without an understanding of the limitations of the methods used to solve it.</p>
<p>Some doubts that crossed my mind:-</p>
<ul>
<li><p>Is Wav2vec2 the right model selection to solve this issue?</p>
</li>
<li><p>How effective is a dataset comprising audio+transcription of single words in shaping and training a speech model, particularly one designed for continuous speech? Despite searching online for databases with continuous dysarthric speech, I couldn't find any good dataset. So, I downloaded a test file from the TORGO database, but it didn't work out. It only had single words or sometimes a sentence here and there, which didn't help me with what I was trying to do.</p>
</li>
<li><p>Have the input values for the model been appropriately transformed? Is the padding method for variable-length audio inputs satisfactory, or does it require adjustments?</p>
</li>
</ul>
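<p>On the padding doubt specifically: the usual approach is to zero-pad every clip in a batch to the longest one and pass along an attention mask so the model can ignore the padded positions (roughly what <code>Wav2Vec2Processor</code> does with <code>padding=True</code>, as far as I understand). A bare-bones sketch of that idea:-</p>

```python
# Zero-pad variable-length audio clips (lists of float samples) to a
# common length and build an attention mask (1 = real sample, 0 = padding).
def pad_batch(batch):
    max_len = max(len(clip) for clip in batch)
    padded, mask = [], []
    for clip in batch:
        pad = max_len - len(clip)
        padded.append(list(clip) + [0.0] * pad)
        mask.append([1] * len(clip) + [0] * pad)
    return padded, mask

padded, mask = pad_batch([[0.1, 0.2, 0.3], [0.4]])
```

<p>If the mask isn't forwarded to the model, the zeros get treated as real (silent) audio, which is one way padding choices can quietly hurt training.</p>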
<h2 id="heading-the-desire">The Desire</h2>
<p>I'm still interested in finding a solution to this problem, but together with a knowledgeable person in the ML space who can handle the domain-specific shenanigans. I wanted to evaluate a model like Wav2vec2 only to set up a good-enough pipeline to then try <a target="_blank" href="https://openai.com/research/whisper">Whisper</a>, since it was trained on much larger data and I expect it to do better.</p>
<p>I used <code>replicate</code> to try a sample of dysarthric audio with Whisper 2.0 for the "Command" word, and it transcribed it as "come on" (temperature 0.2) and "COMMENT!" (temperature 0.4). That's way better than what wav2vec2 came up with.</p>
<p>If anyone is interested, there is a blog post to <strong>fine-tune</strong> Whisper for multi-lingual audio:-</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://huggingface.co/blog/fine-tune-whisper">https://huggingface.co/blog/fine-tune-whisper</a></div>
<p> </p>
<p>I thought maybe if I get <strong>consistent</strong> and <strong>plausible</strong> results with Wav2vec2, I could go ahead with Whisper and make something much better. I am not even sure if Whisper is made for something like this but we won't know unless we try. If you're someone who wants to solve this problem and actually can make <strong>logical/intuitive</strong> decisions regarding the whole problem statement, reach out to me. That's how I would like to solve it further.</p>
<p><img src="https://gifdb.com/images/high/this-is-the-way-mandalorian-staring-somewhere-f54pzvl46jvil8ks.gif" alt="This Is The Way GIF from The Mandalorian" /></p>
<p>Achieving actual results would have been super cool, but I refuse to waste my time relying solely on luck. When building something, understanding the reasoning behind most of the decisions is crucial. It makes things easier to maintain and build upon later.</p>
<p>Thank you for your time 🙏</p>
]]></content:encoded></item><item><title><![CDATA[Prototyping a Q&A web using OpenAI]]></title><description><![CDATA[Introduction
But first the teaser of what's finally built:-2023 certainly feels like the year of AI. Post public reveal of ChatGPT, especially, the reach of this technological revolution has reached beyond the computer folks. My cousin used ChatGPT t...]]></description><link>https://blog.lakbychance.com/prototyping-a-qa-web-using-openai</link><guid isPermaLink="true">https://blog.lakbychance.com/prototyping-a-qa-web-using-openai</guid><dc:creator><![CDATA[Lakshya Thakur]]></dc:creator><pubDate>Mon, 13 Feb 2023 04:05:32 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1676232648213/b1614b0e-00f0-41bc-8a05-edc7f03efee0.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-introduction">Introduction</h3>
<p>But first the teaser of what's finally built:-</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676260989393/d5d49f24-185d-4e11-b884-9efacd2440a2.png" alt="" class="image--center mx-auto" /></p>
<p><strong>2023</strong> certainly feels like the year of AI. Post the public reveal of <strong>ChatGPT</strong>, especially, this technological revolution has reached beyond the computer folks. My cousin used <strong>ChatGPT</strong> to help her with articles. David Guetta is trying out <strong>AI tools now</strong>:-</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://twitter.com/davidguetta/status/1621605376733872129?s=20&amp;t=8Bmz0CW3ceHwwah3SOOdPQ">https://twitter.com/davidguetta/status/1621605376733872129?s=20&amp;t=8Bmz0CW3ceHwwah3SOOdPQ</a></div>
<p> </p>
<p>Microsoft just revealed the <strong>AI-supercharged Bing</strong> to take on <strong>Google</strong> (who is playing catch-up with <strong>Bard</strong>). For the first time, I have felt like an AI technological revolution is truly here at the consumer level.</p>
<h3 id="heading-trying-out-something">Trying out something</h3>
<p>With all the <strong>AI</strong> fluff spreading like wildfire, it was only natural to do a hands-on. Call it <strong>FOMO</strong> or call it curiosity, there are all sorts of <strong>projects</strong> out there right now built upon <a target="_blank" href="https://platform.openai.com/docs/api-reference">Open AI APIs</a> either directly or using <a target="_blank" href="http://replicate.com">replicate</a> or some other API gateway to make it a plug-and-play solution.</p>
<p>I thought I might try out something around <strong>images</strong> since that's where my <strong>creative</strong> interests lie the most. I have certainly played with <strong>midjourney</strong>, <strong>Dall-E</strong> and <strong>Stable-Diffusion</strong> to see their capabilities, and <a target="_blank" href="https://midjourney.com/home"><strong>midjourney</strong></a> is, by far, my favorite.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://twitter.com/Lakbychance/status/1614624336224321536?s=20&amp;t=AABFy16KvD3DCHRX8mt5Ng">https://twitter.com/Lakbychance/status/1614624336224321536?s=20&amp;t=AABFy16KvD3DCHRX8mt5Ng</a></div>
<p> </p>
<h3 id="heading-coming-across-articles-and-code-around-qampa-with-gpt-3">Coming across articles and code around Q&amp;A with GPT-3</h3>
<p>So while exploring and following the <strong>AI</strong> developments, the <strong>quality of life</strong> improvement I feel <strong>AI</strong> is going to bring is turning the heat up on the <strong>Q&amp;A</strong> formats that have existed for a long time. Take ChatGPT, for instance. It's such a good <strong>Q&amp;A</strong> assistant. With its pre-trained data, it certainly performs well. Heck, I even used it to ship a line of code to production at Hashnode not so long ago.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://dagster.io/blog/chatgpt-langchain">https://dagster.io/blog/chatgpt-langchain</a></div>
<p> </p>
<p>The above article on <strong>dagster's</strong> blog is probably read by many folks till now. <strong>TL;DR</strong> version :-</p>
<blockquote>
<p>This article explains how to build a GitHub support bot using GPT-3, LangChain, and Python. It covers topics such as leveraging the features of a modern orchestrator (Dagster) to improve developer productivity and production robustness, Slack integration, dealing with fake sources, constructing the prompt with LangChain, dealing with documents that are too big, dealing with limited prompt window size, dealing with large numbers of documents, and caching the embeddings with Dagster to save time and money.</p>
</blockquote>
<p><strong>GPT-3</strong> is a powerful model, or to be more accurate, <a target="_blank" href="https://platform.openai.com/docs/models/gpt-3">a set</a> of AI models that serve various purposes. <strong>ChatGPT</strong> is based on <strong>GPT-3.5</strong> and so is better at many tasks than <strong>GPT-3</strong>. It's soon going to be available as an API as well.</p>
<p><strong>What are the ways to train a GPT-3 model ?</strong></p>
<blockquote>
<p>There are several ways to train a GPT-3 model, including fine-tuning, data augmentation, and vector-space search. Fine-tuning involves training the model on a specific dataset, while data augmentation involves providing additional data to the model to improve its accuracy. Vector-space search involves using a search engine to find the most relevant sources for a given query.</p>
</blockquote>
<p>A lot of technical jargon again, but as far as I understand, there are <strong>two</strong> ways to extend a model's capability to do something for you:-</p>
<ul>
<li><p><a target="_blank" href="https://platform.openai.com/docs/guides/fine-tuning"><strong>Fine-tuning</strong></a> aims to train a pre-trained model on your own data so it gets better at giving accurate answers to the user's prompts. Consider that you want to <strong>fine-tune</strong> a <strong>GPT-3</strong> model for certain documentation you have been working on. After going through a few articles and skimming the <strong>fine-tuning</strong> docs, I got to understand that there are costs involved in <strong>fine-tuning</strong> a model via <strong>OpenAI</strong>. Also, models like <code>text-davinci-003</code> cannot be used for it right now, so only <strong>base</strong> models can be used. Again, it's a lack of interest from my side specifically that I haven't explored this fully. This probably might be the way to go to train on <strong>production</strong> data and get a model that does exactly what you intend.</p>
</li>
<li><p><a target="_blank" href="https://docs.cohere.ai/docs/prompt-engineering"><strong>Prompt engineering</strong></a> is the process of developing a great prompt to maximize the effectiveness of a large language model like GPT-3. It's like using a model but making it aware of a limited <strong>context</strong> and only asking it to answer based on that <strong>context.</strong> This awareness of context happens at <strong>runtime, unlike fine-tuning</strong>.</p>
</li>
</ul>
<p>Here is another tweet and linked article I read recently which explores creating <strong>a Q&amp;A</strong> model for documentation using the same <strong>prompt engineering</strong>:-</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://twitter.com/MikeEsto/status/1623106899377029121?s=20&amp;t=EsYI6qOqC06qSwetW_8S-A">https://twitter.com/MikeEsto/status/1623106899377029121?s=20&amp;t=EsYI6qOqC06qSwetW_8S-A</a></div>
<p> </p>
<p>In fact, in its final <strong>Shoutouts</strong> section, the author has added references to the dagster article I shared above and more.</p>
<p>So I also wanted to explore the <strong>prompt engineering</strong> side of things. <strong>Langchain</strong> is one of the terms you would read or hear more about when touching this territory.</p>
<blockquote>
<p><strong>Langchain</strong> helps developers combine the power of large language models (LLMs) with other sources of computation or knowledge. It includes features like Data Augmented Generation, which allows developers to provide contextual data to augment the knowledge of the LLM, and prompt engineering, which helps developers develop a great prompt to maximize the effectiveness of a large language model like GPT-3.</p>
</blockquote>
<p>In this article, though, I do not dive into how <strong>langchain</strong> works. I am not even fully sure of the intricacies of the math behind a lot of this <strong>AI</strong> stuff, but at the end of the day, it is all <strong>math</strong>. One of the terms you will come across frequently when dealing with <strong>input</strong> data is <a target="_blank" href="https://platform.openai.com/docs/guides/embeddings/use-cases"><strong>embeddings</strong></a>. Embeddings represent words in numerical form and help us measure the similarities between them. Think of it as a vocabulary that can fit a lot of words. So for big input data, you convert it into embeddings.</p>
<p>Here's how I have understood <strong>prompt engineering</strong> to work:-</p>
<ol>
<li><p>Prepare some data you want the AI model to be aware of.</p>
</li>
<li><p>Split that data into chunks of text and convert them into embeddings. (It's good to split data into chunks because <strong>OpenAI</strong> limits how much text its embedding model can process at once, though that limit is still almost twice what you could pass as a raw string to the GPT-3 completion endpoint directly.)</p>
</li>
<li><p>Calculate embeddings for the question and compare those with the input data embeddings. The closest ones are chosen to give us the <strong>context</strong> or the <strong>raw string</strong> that can be fed into the <strong>completion</strong> endpoint.</p>
</li>
<li><p>There is a <strong>prompt template</strong> involved so that the model can be made aware of what it needs to look for answers in and if it's not able to find it, it can say that it doesn't know the answer instead of giving the wrong one.</p>
<p> Here is how a prompt template might look:-</p>
<blockquote>
<p>Answer the question based on the context below, and if the question can't be answered based on the context, say "I don't know"</p>
<p>Context: {context}</p>
<p>---</p>
<p>Question: {question}</p>
<p>Answer:</p>
</blockquote>
</li>
</ol>
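<p>The comparison in step 3 boils down to cosine similarity between embedding vectors, and step 4 is plain string templating. A minimal sketch with toy two-dimensional vectors (no OpenAI client involved; real embeddings have over a thousand dimensions):-</p>

```python
import math

# Cosine similarity between two embedding vectors.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Pick the chunk whose embedding is closest to the question's embedding.
def best_context(question_emb, chunks):
    return max(chunks,
               key=lambda c: cosine_similarity(question_emb, c["embedding"]))["text"]

# The prompt template from step 4, filled in at runtime.
PROMPT = """Answer the question based on the context below, and if the question can't be answered based on the context, say "I don't know"

Context: {context}

---

Question: {question}
Answer:"""

chunks = [
    {"text": "Next.js 13.1 improved Middleware.", "embedding": [0.9, 0.1]},
    {"text": "Vercel's pricing page.", "embedding": [0.1, 0.9]},
]
context = best_context([0.8, 0.2], chunks)
prompt = PROMPT.format(context=context, question="What changed in Middleware?")
```

<p>In practice, you'd feed the top few closest chunks (not just one) into the template, up to the completion endpoint's token limit.</p>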
<h3 id="heading-a-web-crawler-powered-qampa-service-with-openai">A web crawler powered Q&amp;A service with OpenAI</h3>
<p>In the <strong>dagster</strong> article, there was a point in their <strong>Future work</strong> section that stated this:-</p>
<blockquote>
<p><strong><em>Crawl web pages instead of markdown.</em></strong> It would be relatively straightforward to crawl a website’s HTML pages instead of markdown files in a GitHub repo.</p>
</blockquote>
<p>This resonated with me. While exploring all this <strong>AI</strong> stuff, specifically the openai fine-tuning docs, I found an examples section at the bottom of the page containing links to openai's <strong>python notebooks</strong> relevant to <strong>Q&amp;A</strong>. So I went through the notebooks involved, starting with this:-</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/openai/openai-cookbook/blob/main/examples/fine-tuned_qa/olympics-1-collect-data.ipynb">https://github.com/openai/openai-cookbook/blob/main/examples/fine-tuned_qa/olympics-1-collect-data.ipynb</a></div>
<p> </p>
<p>The above landed me here:-</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/openai/openai-cookbook/blob/main/examples/Question_answering_using_embeddings.ipynb">https://github.com/openai/openai-cookbook/blob/main/examples/Question_answering_using_embeddings.ipynb</a></div>
<p> </p>
<p>But I think the most important finding was that <strong>openai</strong> essentially has these cookbooks. So I went to the root folder to see what all they have, and that's where I landed on this:-</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/openai/openai-cookbook/blob/main/apps/web-crawl-q-and-a/web-qa.ipynb">https://github.com/openai/openai-cookbook/blob/main/apps/web-crawl-q-and-a/web-qa.ipynb</a></div>
<p> </p>
<p>The above cookbook was added just a week ago and does exactly what I was wondering about.</p>
<p>In a nutshell, it did the following:-</p>
<ol>
<li><p>Crawl <a target="_blank" href="https://openai.com/">https://openai.com/</a> and generate all the text files for each of the links crawled.</p>
</li>
<li><p>Generate embeddings from all the text files by chunking them first.</p>
</li>
<li><p>Compare the question's embedding against the data embeddings and pick the closest chunks.</p>
</li>
<li><p>Then trying to answer a question via a <strong>prompt template</strong> (exactly what I mentioned before as an example).</p>
</li>
</ol>
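<p>Step 2's chunking can be as simple as splitting on word count (a rough proxy for tokens) so that each chunk stays under the embedding model's input limit. A sketch of the idea, with a made-up chunk size:-</p>

```python
# Split text into word-bounded chunks of at most max_words words each,
# so every chunk fits within the embedding model's input limit.
def chunk_text(text, max_words=500):
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

chunks = chunk_text("one two three four five", max_words=2)  # 3 chunks
```

<p>The cookbook itself counts actual tokens with a tokenizer rather than words, but the splitting logic is the same shape.</p>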
<p>I installed <strong>python3</strong> on my system, then copied the relevant files from GitHub. After a couple of <strong>Stack Overflow</strong> searches and setting up the <strong>OpenAI API key</strong> (paid stuff), the program ran and did exactly what was shown in the python notebook online.</p>
<p>Everything from there was me running the same program on different sites: starting with the Next.js blog and docs, then my blog, and a couple of other sites.</p>
<p>All in all, I refactored the whole code to be more reusable and utility oriented in the following manner:-</p>
<ul>
<li><p>Allowing either to form the Q&amp;A context on a single page or the crawled pages via a configurable <code>recursive</code> parameter.</p>
</li>
<li><p>Modifying the folder and file-naming code to save data for each site that is crawled.</p>
</li>
<li><p>Not crawling or creating embeddings again if already exists for a site and proceeding directly to the answer part.</p>
</li>
<li><p>Creating a minimal flask service to consume the refactored code.</p>
</li>
</ul>
<p>The whole refactor resulted in the following six python files:-</p>
<ul>
<li><p><code>web.py</code>: Manages the whole crawling and saving of text files from the crawled links.</p>
</li>
<li><p><code>utils.py</code>: All the utility functions used across different files.</p>
</li>
<li><p><code>embeddings.py</code>: Manages the generation and saving of the <code>embeddings.csv</code> file being created.</p>
</li>
<li><p><code>answer.py</code>: Manages the prompt engineering bit of the code.</p>
</li>
<li><p><code>server.py</code>: The flask service wrapper over our controller function.</p>
</li>
<li><p><code>index.py</code>: The file containing the high-level code that runs when the flask service is hit.</p>
<pre><code class="lang-python">  <span class="hljs-comment"># index.py file</span>

  <span class="hljs-keyword">from</span> web <span class="hljs-keyword">import</span> crawl
  <span class="hljs-keyword">from</span> embeddings <span class="hljs-keyword">import</span> generate_url_embeddings
  <span class="hljs-keyword">from</span> answer <span class="hljs-keyword">import</span> getAnswer

  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">train</span>(<span class="hljs-params">url, recursive=False</span>):</span>
      crawl(url, recursive=recursive)
      generate_url_embeddings(url, recursive=recursive)

  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">ask</span>(<span class="hljs-params">question, url, recursive=False</span>):</span>
      train(url, recursive=recursive)
      <span class="hljs-keyword">return</span> getAnswer(question, url, recursive=recursive)
</code></pre>
<h3 id="heading-playing-around-with-the-created-api">Playing around with the created API</h3>
<ul>
<li><p><a target="_blank" href="http://nextjs.org/blog"><code>nextjs.org/blog</code></a> : This was created using <code>recursive:true</code> so that we can crawl all the links and build the input data from there.</p>
<ul>
<li><p>Question: In which version of Next.js, <code>unoptimized:true</code> got supported inside <code>next.config.js</code> ?</p>
<blockquote>
<p>Reply by API: Starting with Next.js 12, <code>unoptimized:true</code> got supported inside <code>next.config.js</code>.</p>
</blockquote>
</li>
<li><p>Question: What are the latest updates to middleware?</p>
<blockquote>
<p>Reply by API: With 13.1, you can now return responses from Middleware, as well as set headers on the request. These API improvements give you powerful new flexibility to customize every part of the Next.js routing lifecycle. The <code>experimental.allowMiddlewareResponseBody</code> configuration option inside next.config.js is no longer required. You can now more easily set headers on the request, as well as respond directly without having to rewrite or redirect.</p>
</blockquote>
</li>
</ul>
</li>
<li><p><strong>Fun fact:</strong> The <strong>TL;DR</strong> version of the <strong>dagster</strong> article I stated earlier was generated by this API and also the answer to the question on how we can train GPT-3 model and the bit about langchain.</p>
</li>
<li><p><a target="_blank" href="https://vercel.com/legal/privacy-policy"><code>https://vercel.com/legal/privacy-policy</code></a> : Created using <code>recursive:false</code> to just get the text for this page.</p>
<ul>
<li><p>Question: What is stated about data usage?</p>
<blockquote>
<p>We use data to administer and improve the Services, analyze trends, track users' use of the Platform, and gather broad demographic information for aggregate use. We also use Device and Usage Information to generate and process aggregated information, such as statistical or demographic data. We may also track the total number of visitors to our Platform, review Log Data, or track the number of visitors to each page of our Site. We may also use Location Information derived from IP addresses. We do not, however, collect or process End Users’ IP addresses through the use of our analytics Services.</p>
</blockquote>
</li>
</ul>
</li>
<li><p><a target="_blank" href="https://vercel.com/legal/terms"><code>https://vercel.com/legal/terms</code></a> (<code>recursive:false</code>)</p>
<ul>
<li><p>Question: Which terms are not in favor of the user?</p>
<blockquote>
<p>You agree that violations of this Agreement by yourself or any person or entity acting under your account will, in addition to any other remedies including criminal prosecution, result in the termination of your access to Vercel's website and removal (taking down) of all projects and deployments. In addition, violation of these terms or any of Vercel's policies may result in tracking information being stored to identify the offending person and permanent restriction from holding an account on Vercel's website and Services. We may shut down affected projects or deployments on the hobby plan in case of any delays or performance problems including, without limitation, those caused by a malicious attack on a project or deployment. We will notify you if your usage is unreasonable, and we may shut down and terminate projects and accounts that are creating an unreasonable burden on our infrastructure and/or business operations, as outlined in Sections 9 and 16.1 below. All fees are non-refundable, except as expressly stated otherwise in this Agreement.</p>
</blockquote>
</li>
<li><p>Question: Which terms are in favor of the user? Summarize them in less than 50 words.</p>
<blockquote>
<p>The user is granted a personal, non-sublicensable, nonexclusive, nontransferable, limited license to use the Services for their internal business or personal purposes. They are also granted a license to use and reproduce their trademarks, tradenames and logos in Vercel's marketing materials and website(s). Vercel will abide by any written trademark usage guidelines provided by the user. Vercel will also provide thirty (30) days prior notice for any changes to the fees or Agreement.</p>
</blockquote>
</li>
</ul>
</li>
<li><p>Latest featured article on hashnode - <a target="_blank" href="https://iwtyo.today/cognitive-bias-as-a-software-developer">https://iwtyo.today/cognitive-bias-as-a-software-developer</a> (<code>recursive:false)</code></p>
<ul>
<li><p>Question: Create a <strong>TL;DR</strong> version of this article</p>
<blockquote>
<p>Recognizing our own cognitive biases is essential to our lives and work. Egoless coding is a great practice to cultivate, as it helps us to accept and move on from our less-than-perfect code. To identify our skill level, we can ask our colleagues for their honest opinion. We should also break down our core skills into their most basic components and assess our proficiency in each one. Lastly, don't be afraid to seek help if you think you may be suffering from Imposter Syndrome.</p>
</blockquote>
</li>
<li><p>Question: What are all the ways to identify skills?</p>
<blockquote>
<p>Identifying your skills can be done by assessing your proficiency in each component of your core skill, asking your colleagues for their honest opinion of your capability, contrasting your point of view with others, recognizing the role others can play in helping you improve, building communication and relationships, acknowledging the skills that you have and learning the ones that you don't, and recognizing cognitive biases.</p>
</blockquote>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p>All right, I might have gone overboard with the examples, but trust me, I have run this service on a lot of sites by now. Thus the bill:-</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676227200354/66b16f10-673f-45fc-9467-1142fe083beb.png" alt class="image--center mx-auto" /></p>
<p>If you want to play around with the code, feel free to check out the following:-</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/lakbychance/webAI">https://github.com/lakbychance/webAI</a></div>
<p> </p>
<h3 id="heading-limitations">Limitations</h3>
<ul>
<li><p>As only the partial author of this code, I am not aware of the internal workings of how the embeddings are logically compared. I'd call it a knowledge limitation that can be overcome by someone good at python and general ML stuff, or by me if I get into <code>maths</code> again. For me, most of it is a black box and I work on high-level concepts.</p>
</li>
<li><p>The speed is a total bummer. It takes minutes to train a <code>recursive:true</code> site and <code>40-50s</code> for a <code>recursive:false</code> one. The query to the completion endpoint with the saved embeddings can then take between <code>10-60s</code>. But if you're crawling a 20-minute article, it's not that big a deal.</p>
</li>
<li><p>There is a cost associated with the creation of embeddings and query completion but probably much less than <strong>fine-tuning</strong>.</p>
</li>
<li><p>Sometimes the training isn't good enough and results in a lot of "I don't know" responses even when it shouldn't. I have also seen the model return wrong answers in a few instances, even with this prompt template.</p>
</li>
</ul>
<h3 id="heading-conclusionhttp19216849230007demo">Conclusion</h3>
<p>This effort was purely done out of curiosity, dipping my toes into the <strong>AI waters</strong>. I, for one, am excited to try out the <strong>Bing</strong> chat assist because it can fundamentally do all of the above much faster and with much greater precision. Nevertheless, this was fun to create. <code>Python</code> isn't my work language, so my command of it isn't as good. But in the era of <strong>GitHub Copilot</strong>, I guess the lack of command gets partially hidden.</p>
<p>The transformation of the tech industry by AI is what I am looking forward to.</p>
<p>Thank you for your time :)</p>
]]></content:encoded></item><item><title><![CDATA[The Detective Side Of Being A Software Developer]]></title><description><![CDATA[Introduction
There are days in my job when I am implementing a new feature, fixing a bug or documenting some stuff. Most days are occupied with this stuff.
But some days are all about digging into changelog, commits, issues etc to find the cause of s...]]></description><link>https://blog.lakbychance.com/the-detective-side-of-being-a-software-developer</link><guid isPermaLink="true">https://blog.lakbychance.com/the-detective-side-of-being-a-software-developer</guid><category><![CDATA[General Programming]]></category><category><![CDATA[Developer]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[Web Development]]></category><category><![CDATA[DebuggingFeb]]></category><dc:creator><![CDATA[Lakshya Thakur]]></dc:creator><pubDate>Fri, 03 Feb 2023 11:25:22 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1675420620526/75028556-9de8-4fbb-9f22-53da6ec3b464.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-introduction">Introduction</h3>
<p>There are days in my job when I am implementing a new feature, fixing a bug or documenting some stuff. Most days are occupied with this stuff.</p>
<p>But some days are all about digging into changelog, commits, issues etc to find the cause of some unexpected behaviour of the library/framework being used.</p>
<p>In this article, I will be sharing instances of when at work, I had to deal with the above.</p>
<p>Also, I am a <strong>Software Developer</strong> at <strong>Hashnode</strong> and my experiences are derived from working here. This is a crisp collection of debugging stories, tagged as part of</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://townhall.hashnode.com/debugging-feb-writeathon">https://townhall.hashnode.com/debugging-feb-writeathon</a></div>
<p> </p>
<p>Although this article won't be eligible for the writeathon, it might help you steer yourself in the right direction while solving your next bug 😉.</p>
<h3 id="heading-1-nextjs-middleware-not-working-suddenly-as-expected">1. Next.js <code>middleware</code> suddenly not working as expected</h3>
<p>I wrote a whole article on this one for our engineering blog last year. You can check it out below:-</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://engineering.hashnode.com/stuck-in-the-middleware">https://engineering.hashnode.com/stuck-in-the-middleware</a></div>
<p> </p>
<p><strong>TL;DR :-</strong></p>
<ul>
<li><p>Went to GitHub to check whether <strong>Next.js</strong> had shipped something related to middleware in the last 24 hours.</p>
</li>
<li><p>Saw a breaking change in how query parameters behave within middleware in the latest canary release.</p>
</li>
<li><p>Realised that middleware was in beta and therefore not bound by semantic versioning, which explained the unexpected behaviour.</p>
</li>
<li><p>Fixed the code within middleware according to the latest changes.</p>
</li>
</ul>
<h3 id="heading-2-next-12-andgt-next-13-upgrade-causes-product-tour-to-fail">2. <code>Next 12</code> -&gt; <code>Next 13</code> upgrade causes product tour to fail</h3>
<ul>
<li><p>We recently upgraded the codebase powering <code>hashnode.com</code> from <code>Next 12</code> to <code>Next 13</code>.</p>
</li>
<li><p>On production, we observed that the onboarding product tour for new users wasn't working correctly anymore.</p>
</li>
<li><p>Everything worked correctly in local development, but I was able to replicate the issue by running the <code>built</code> output locally.</p>
</li>
<li><p>This meant something was wrong with the <code>built</code> output.</p>
</li>
<li><p>I went to the library's GitHub Issues tab and started searching for anything related to Next.js, which led me to the following issue :-</p>
</li>
</ul>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/gilbarbara/react-joyride/issues/857">https://github.com/gilbarbara/react-joyride/issues/857</a></div>
<p> </p>
<ul>
<li><p>The following comment by the maintainer helped in identifying the cause of and fix for the issue</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675406968944/8243b53e-57e9-48c6-9e3f-4fa75835189d.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Jumping to the discussion thread above, the <strong>swc</strong> maintainer had already acknowledged the issue, whose patch would land in Next <code>13.0.7</code>. The PR with the fix:-</p>
</li>
</ul>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/swc-project/swc/pull/6509">https://github.com/swc-project/swc/pull/6509</a></div>
<p> </p>
<ul>
<li>Until then, simply adding <code>swcMinify: false</code> to <code>next.config.js</code> solved the issue and the built output was correct. Starting with <code>Next 13</code>, <code>swcMinify</code> is set to <code>true</code> by default, unlike previous versions, which is why we only saw this behaviour after the upgrade.</li>
</ul>
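<p>For reference, the temporary workaround was a one-line change (a sketch of the relevant part of <code>next.config.js</code>, not our full config):</p>
<pre><code class="lang-javascript">// next.config.js
module.exports = {
  swcMinify: false, // fall back to the Terser minifier until the swc patch lands
};
</code></pre>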
<h3 id="heading-3-the-blog-stats-graph-started-indefinitely-shrinking-on-browser-zoom-at-90">3. The Blog Stats graph started indefinitely shrinking on browser zoom at 90%</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675407587518/9fb8c459-c177-4bdf-b162-ce13969f39e7.png" alt class="image--center mx-auto" /></p>
<ul>
<li>The first step for me here was to simply google the above title as-is, which landed me here</li>
</ul>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/chartjs/Chart.js/issues/10890">https://github.com/chartjs/Chart.js/issues/10890</a></div>
<p> </p>
<ul>
<li>The above discussion led me to the following PR which was opened to fix it</li>
</ul>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/chartjs/Chart.js/pull/10971">https://github.com/chartjs/Chart.js/pull/10971</a></div>
<p> </p>
<ul>
<li>This fix landed in version <code>4.1.1</code> of the <code>chart.js</code> library. So I simply updated our copy to the latest version (<code>4.2.0</code>), which already has the patch.</li>
</ul>
<h3 id="heading-4-withaxiom-api-wrapper-increased-our-vercel-serverless-execution-gb-hours">4. <code>withAxiom</code> API wrapper increased our Vercel serverless execution GB hours</h3>
<ul>
<li><p>We use <code>axiom</code> for logging purposes at Hashnode.</p>
</li>
<li><p>In early January, we wrapped one of our <code>/api</code> handlers with the <code>withAxiom</code> HOF.</p>
</li>
<li><p>We observed that since then our serverless execution GB hours <strong>SKYROCKETED!!</strong></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675407820861/3babc57e-9262-4cc5-a7f9-1e617e6d2fae.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Digging further, we looked at the internal implementation of <code>withAxiom</code> and found a couple of <code>await</code>s that dealt with <strong>flushing the logs</strong> and increased our GB hours.</p>
</li>
</ul>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/axiomhq/next-axiom/blob/28fd9dd530076696abeaa1f57743ab0e06c3ce4b/src/withAxiom.ts#L105">https://github.com/axiomhq/next-axiom/blob/28fd9dd530076696abeaa1f57743ab0e06c3ce4b/src/withAxiom.ts#L105</a></div>
<p> </p>
<ul>
<li>So we removed that wrapper function and saw the GB hours stabilise again. We have informed the maintainers about it, and they are thinking of a solution to rectify the behaviour.</li>
</ul>
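<p>To illustrate the mechanism (a hypothetical sketch, not <code>next-axiom</code>'s actual code): when a wrapper <code>await</code>s a slow log flush before returning, the serverless function stays alive, and billed, for the entire flush.</p>
<pre><code class="lang-javascript">// Simulates an API wrapper that awaits a log flush before returning.
function sleep(ms) {
  return new Promise(function (resolve) {
    setTimeout(resolve, ms);
  });
}

function withAwaitedFlush(handler) {
  return async function (req, res) {
    const result = await handler(req, res);
    await sleep(200); // stands in for awaiting the log flush
    return result;
  };
}

async function main() {
  const start = Date.now();
  await withAwaitedFlush(async function () {
    return 'ok';
  })({}, {});
  // The handler itself was instant, yet total (billed) time includes the flush.
  console.log(Date.now() - start >= 200);
}

main();
</code></pre>
<p>Fire-and-forget logging (or flushing after the response is sent) avoids that extra billed time, at the risk of dropping some logs.</p>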
<h3 id="heading-5-unoptimizedtrue-not-working-in-nextconfigjs-as-we-expected">5. <code>unoptimized:true</code> not working in <code>next.config.js</code> as we expected</h3>
<ul>
<li><p>If you go to <a target="_blank" href="https://nextjs.org/docs/api-reference/next/legacy/image#unoptimized">next.js docs</a>, you will find how one can disable the image optimization by setting <code>unoptimized:true</code> in the <code>next.config.js</code></p>
</li>
<li><p>But the above wasn't working for us in one of our projects, while surprisingly it was working in another.</p>
</li>
<li><p>The difference between the two was the version of <code>Next.js</code> each was running.</p>
</li>
<li><p>In the one running <code>Next 13</code>, it worked. In the one running <code>Next 12.2.3</code>, it didn't.</p>
</li>
<li><p>So I dug through the <strong>releases</strong> on GitHub and found out that support for <strong>stable unoptimized</strong> landed in version <code>12.3.0</code>. Before that, you could only replicate the same functionality by setting the flag inside the <code>experimental</code> object of <code>next.config.js</code>.</p>
</li>
</ul>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/vercel/next.js/releases/tag/v12.3.0">https://github.com/vercel/next.js/releases/tag/v12.3.0</a></div>
<p> </p>
<ul>
<li><p>Before <code>12.3.0</code> :-</p>
<pre><code class="lang-javascript">  <span class="hljs-comment">// next.config.js </span>

  <span class="hljs-built_in">module</span>.exports = {
    <span class="hljs-attr">experimental</span>: {
      <span class="hljs-attr">images</span>: {
        <span class="hljs-attr">unoptimized</span>: <span class="hljs-literal">true</span>
      }
    }
  }
</code></pre>
</li>
<li><p>After <code>12.3.0</code> :-</p>
<pre><code class="lang-javascript">  <span class="hljs-comment">// next.config.js </span>

  <span class="hljs-built_in">module</span>.exports = {
    <span class="hljs-attr">images</span>: {
      <span class="hljs-attr">unoptimized</span>: <span class="hljs-literal">true</span>
    }
  }
</code></pre>
</li>
<li><p>Validated the same by going to the <a target="_blank" href="https://nextjs.org/blog/next-12-3#disable-image-optimization-stable">changelog blog</a> by Vercel.</p>
</li>
<li><p>So we just updated the non-working project to <code>12.3.0</code> and <code>unoptimized:true</code> started working as we intended.</p>
</li>
</ul>
<h3 id="heading-conclusion">Conclusion</h3>
<p>It does feel like detective work. You are going through clues, clicking a link, getting redirected to something similar. Hopping from one issue or commit to another, you finally find the line of code or PR that resolves it all for you.</p>
<p>Also, expecting oneself to do this on Day 1 of their career or a new job is probably not the smartest strategy. It takes time to get into <strong>the flow</strong> <strong>of finding patterns</strong> and I think past experiences just keep adding up to it.</p>
<p>Feel free to share similar instances of work in the comment section!</p>
<p>Thank you for your time :)</p>
]]></content:encoded></item><item><title><![CDATA[One Planet Later At Hashnode]]></title><description><![CDATA[What does the title even mean?
It took me some time to come up with this title.

If the above doesn't make sense, then we recently launched our new editor experience which we like to call Neptune. More about it below:-
https://townhall.hashnode.com/i...]]></description><link>https://blog.lakbychance.com/one-planet-later-at-hashnode</link><guid isPermaLink="true">https://blog.lakbychance.com/one-planet-later-at-hashnode</guid><category><![CDATA[Hashnode]]></category><category><![CDATA[Developer]]></category><category><![CDATA[General Programming]]></category><category><![CDATA[Learning Journey]]></category><category><![CDATA[Software Engineering]]></category><dc:creator><![CDATA[Lakshya Thakur]]></dc:creator><pubDate>Sat, 03 Dec 2022 13:59:37 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1670075472755/-ClmjZRfA.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-what-does-the-title-even-mean">What does the title even mean?</h3>
<p>It took me some time to come up with this title.</p>
<p><img src="https://media0.giphy.com/media/3oFzm8lomYb9Q8MZs4/giphy.gif?cid=ecf05e47p3iq8ymg9uhy2izfs3movunuhy8jdnniqg31qxlg&amp;rid=giphy.gif&amp;ct=g" alt="proud i feel good GIF by MasterChef Junior" class="image--center mx-auto" /></p>
<p>If the above doesn't make sense, then we recently launched our new editor experience which we like to call <strong>Neptune</strong>. More about it below:-</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://townhall.hashnode.com/introducing-neptune-our-new-wysiwyg-blog-editor">https://townhall.hashnode.com/introducing-neptune-our-new-wysiwyg-blog-editor</a></div>
<p> </p>
<p>And this is my first article using that editor (well, not technically the first, since I am sure I have written a bunch of drafts and articles while testing it). But yes, the first public article for the readers 🎉 of my blog.</p>
<p>But what is this article about?</p>
<p>Well, I completed <strong>one year</strong> at Hashnode a month ago and this article is all about reflecting on that experience. I already shared my experience in another article I wrote when I completed <strong>6 months</strong> at Hashnode. So, to catch up, kindly read the below:-</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://blog.lakbychance.com/500-commits-later-at-hashnode">https://blog.lakbychance.com/500-commits-later-at-hashnode</a></div>
<p> </p>
<p>Now let's see what else was on my plate over the last 6 months:-</p>
<h3 id="heading-twitter-connect">Twitter Connect</h3>
<p>Did you know that you can connect with your Twitter tribe on Hashnode as well? We make it easy for you to find those folks and follow them instantaneously.</p>
<p>This was fun to build. I got to touch both the front and backend layers to build this. We started with one design but then shifted to a much more UX-friendly one and worked on how the user recommendations should turn up. It was fun to collaborate with <a class="user-mention" href="https://hashnode.com/@learnwithmegha">Megha Pathak</a> , <a class="user-mention" href="https://hashnode.com/@evilhead">Girish Patil</a> and <a class="user-mention" href="https://hashnode.com/@SandroVolpicella">Sandro Volpicella</a> on this.</p>
<p>You can read more about it below:-</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://townhall.hashnode.com/discover-hashnode-writers-you-already-follow-on-twitter">https://townhall.hashnode.com/discover-hashnode-writers-you-already-follow-on-twitter</a></div>
<p> </p>
<h3 id="heading-migrating-our-markdown-editor-from-the-old-codebase-to-the-new-one">Migrating our markdown editor from the old codebase to the new one</h3>
<p>I think this is the first time I got to lay my hands on the markdown editor. It is easily one of the most critical parts of the Hashnode experience, and migrating it was certainly fun. I got to collaborate with <a class="user-mention" href="https://hashnode.com/@evilhead">Girish Patil</a> and <a class="user-mention" href="https://hashnode.com/@rajatkapoor">Rajat Kapoor</a> on this one. And unlike the <code>hashnode.com</code> migration experience, this was much better planned and executed. The important part of that execution was documenting our goals, findings, blockers etc. on a regular basis and communicating clearly on expectations. Also, this was probably the first time we had an internal bug bash, which went smoothly and made us much more confident in shipping it to our users.</p>
<p>But why was this migration needed in the first place?</p>
<p>Well, we are steadily moving away from our legacy codebase to make use of the capabilities the new one has to offer. We wanted the experience to feel much faster when users landed on our editor, and we saw a massive improvement in page load speed, since earlier we had to proxy our editor and the extra hop cost us <strong>seconds</strong>. We not only migrated the editor but also fixed some bugs in the markdown editing experience and enriched its UI. Overall, this was a solid A+ team effort.</p>
<h3 id="heading-improving-blog-search-on-hashnode-and-personal-blogs">Improving blog search on Hashnode and personal blogs</h3>
<p>We introduced a new tab called <strong>Blogs</strong> which makes it easier for our users to search for individual/team blogs powered by Hashnode. Along with this, we also improved the article search within each blog.</p>
<h3 id="heading-adding-the-capability-to-import-devtohttpsdevto-posts-as-drafts">Adding the capability to import <a target="_blank" href="https://dev.to/">dev.to</a> posts as drafts</h3>
<p>This was requested by many users of our platform and we finally prioritized it. The groundwork had already been laid by our engineers, since we supported this for RSS articles. But it was still fun to work on, because the UX ended up much better than before. Now users didn't have to worry about their old articles getting published right after import; they had the choice to bring them in as drafts or posts.</p>
<h3 id="heading-migrating-draft-preview-to-blogs">Migrating Draft Preview to Blogs</h3>
<p>The ownership of each article residing on your domain certainly feels good and Hashnode makes that an easy experience for the user. But there was</p>
<p><img src="https://i.ytimg.com/vi/in9SX3enCHU/maxresdefault.jpg" alt="History of 'One More Thing...' [Compilation] - YouTube" /></p>
<p>We make sure to sync your changes to a draft when you write something. But the previews of that draft used to still reside on <code>hashnode.com</code> itself. So when a user had to preview their own or someone else's draft article, they landed on <code>hashnode.com/preview/[draftId]</code> and not on the writer's domain. We had to make this experience as seamless as our published articles.</p>
<p>I expressed interest in picking this up because I had a rough idea of the technical complexity of migrating this from our old codebase to our blog codebase. And after being part of two big migrations, I had enough experience to execute this one independently. While the implementation went well, it got sidetracked in code review because we had other priority items to ship, so it stayed in review for a whole <strong>MONTH</strong>!! But everything worked out well in the end, since this affected not just our web experience but also how the mobile app flow would work with these new changes, and it is probably the first time I got to work with <a class="user-mention" href="https://hashnode.com/@iamshadmirza">Mohd Shad Mirza</a> as well.</p>
<p>Even before this got shipped, I worked on another task to ensure that a user can only create an article on Hashnode if they have a blog. We wanted to push for content ownership here and unlock capabilities that can only happen if the draft article is previewed on the user's domain (support for custom widgets, which we recently shipped for draft previews).</p>
<p>Overall, understanding the problem statement, executing it and tackling the edge cases was enjoyable while implementing this.</p>
<h3 id="heading-worked-on-something-that-we-couldnt-ship-to-users">Worked on something that we couldn't ship to users</h3>
<p>There are times when you might have spent a while working on something, even witnessed its completion, and yet it couldn't go live due to a change of scope or priorities. This was a total revamp of one of our existing features, on which I collaborated with <a class="user-mention" href="https://hashnode.com/@sai_krrishhh">Sai Krishna Prasad Kandula</a> , who did most of the heavy lifting. It was fun, since we whiteboarded the flow diagrams along with possible edge cases and the execution. It started as a growth experiment, but we soon realized the scope was much broader; still, we managed to get it all working. The work isn't discarded yet, but it also hasn't been made public to our users. Let's see when it sees the light of day 🤞.</p>
<h3 id="heading-the-new-wysiwyg-editor-experience">The new WYSIWYG Editor Experience</h3>
<p>I feel this is the highlight of my work at Hashnode in the past year. I collaborated with <a class="user-mention" href="https://hashnode.com/@evilhead">Girish Patil</a> , <a class="user-mention" href="https://hashnode.com/@rajatkapoor">Rajat Kapoor</a> and <a class="user-mention" href="https://hashnode.com/@UXpaul">Pritam Paul</a> on this. As good as the editor migration experience was, this was a notch above. We solved for unknowns here, and that makes a difference in the impact. There is so much to write about this, but that would be better as a standalone blog article.</p>
<h3 id="heading-hashnode-retreat">Hashnode Retreat</h3>
<p>This was such a great experience. Met most of our team members finally and had a fun time with them at Phuket. Again, this could use another article. But here is a tweet by our PM <a class="user-mention" href="https://hashnode.com/@sumudusiriwardana">Sumudu Siriwardana</a> showing our happy faces:-</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://twitter.com/sumusiriwardana/status/1592107425947340800?s=20&amp;t=7t81cxwRybduIfKw9_4UMQ">https://twitter.com/sumusiriwardana/status/1592107425947340800?s=20&amp;t=7t81cxwRybduIfKw9_4UMQ</a></div>
<p> </p>
<p>Alright, in the last one I am just photobombing and looking confused.</p>
<h3 id="heading-few-bundled-things">Few Bundled Things...</h3>
<h4 id="heading-debugging">Debugging</h4>
<p>I like to debug issues. That's why I probably still try to solve stuff on Stack Overflow even though I am sure <strong>ChatGPT</strong> will make me obsolete soon 🥲.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://stackoverflow.com/users/8130690/lakshya-thakur">https://stackoverflow.com/users/8130690/lakshya-thakur</a></div>
<p> </p>
<p>There are occasional bugs that we devs have to solve, and after solving the middleware bug that resulted in this <a target="_blank" href="https://engineering.hashnode.com/stuck-in-the-middleware">article</a> on our engineering blog, I faced another one where a user's draft crashed our article creation API (legacy) and also prevented that user from loading the draft preview. It took me an entire day to figure out the problem, even after an hour of pair debugging with <a class="user-mention" href="https://hashnode.com/@evilhead">Girish Patil</a> .</p>
<p>The solution to this problem was very straightforward but finding the cause was not. It was perplexing, to be honest.</p>
<p>But sometimes, digging deep into such perplexing behavior can give us better returns than our assumptions about the code.</p>
<h4 id="heading-code-reviews-andamp-documentation">Code reviews &amp; Documentation</h4>
<p>Besides debugging, code reviews form an important part of our dev routine, and we are trying to speed that process up. As a startup, we have to move fast while making sure we don't break things, and I feel code reviews are the single point that decides a lot of that. To speed up these reviews, I have penned down my experience in the form of guidelines we can follow as devs to ensure basic coverage of all relevant parameters before we raise a <strong>PR</strong> for review, so a code reviewer doesn't need to do that rework.</p>
<p>Down the line, I hope we automate many parts of it and that the improved DX helps us move <strong>fast</strong> <strong>and confidently</strong>. Also, code reviews help me collaborate with more folks than I can if I just stick to writing code.</p>
<h4 id="heading-interviews">Interviews</h4>
<p>Being able to recruit for Hashnode is another experience I enjoy. We are welcoming a new engineer soon from those sets of interviews. Assessing candidates who are passionate about what they do and what they bring to the table is a constant learning experience. So, I continue to look for folks who can help us scale Hashnode to the next level.</p>
<h4 id="heading-self-assessment">Self Assessment</h4>
<p>After completing 6 months at Hashnode, I scheduled calls with our co-founders to talk more about what my career aspirations look like and what their assessment of me is. I would like to thank my Dad here for pushing me to have these discussions. Due to those, I have a much cleaner vision of what moving forward one step ahead looks like for me.</p>
<h3 id="heading-personal-commitments">Personal Commitments</h3>
<p>Okay, last time I was boasting about my fitness routine, but this time I have gained back 4.5 kgs after losing more than 10 kgs 6 months ago.</p>
<p><img src="https://media0.giphy.com/media/1wXbnKYCCFXqlqzhNU/giphy.gif" alt="Large-sigh GIFs - Get the best GIF on GIPHY" class="image--center mx-auto" /></p>
<p>But before that, it's good I went on a trek with my friend and it turned out to be an impactful learning experience. We got featured here:-</p>
<iframe width="100%" height="800px" src="https://www.instagram.com/p/Ci0PgdSBzOk/embed"></iframe>

<h3 id="heading-wrapping-up"><strong>Wrapping Up</strong></h3>
<p>It's been rare that I haven't had to collaborate with <a class="user-mention" href="https://hashnode.com/@sandeep">Sandeep</a> and <a class="user-mention" href="https://hashnode.com/@fazlerocks">Syed Fazle Rahman</a> on the things mentioned above. These two care deeply about what's being built and bring their own founder experience to solving problems with us. We sometimes find Sandeep building something as a side quest in a couple of hours or days, and Fazle implementing a bunch of design changes we didn't even know about. They are still builders at heart.</p>
<p>The experience of <strong>ownership</strong>, <strong>impact</strong> and <strong>collaboration</strong> has surely increased since I shared my last one and I see that as a win.</p>
<p>Thank you for your time :)</p>
]]></content:encoded></item><item><title><![CDATA[Building iOS Stopwatch functionality using XState]]></title><description><![CDATA[Introduction

I have been meaning to try out xstate for long now by building a stopwatch example using it. For functionality, I referred to iOS's stopwatch in their native Clock App. 
But why stopwatch ? Well, I was once asked in an interview to buil...]]></description><link>https://blog.lakbychance.com/building-ios-stopwatch-functionality-using-xstate</link><guid isPermaLink="true">https://blog.lakbychance.com/building-ios-stopwatch-functionality-using-xstate</guid><category><![CDATA[JavaScript]]></category><category><![CDATA[General Programming]]></category><category><![CDATA[Web Development]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[xstate]]></category><dc:creator><![CDATA[Lakshya Thakur]]></dc:creator><pubDate>Sun, 31 Jul 2022 07:17:58 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1659250193594/Tm-A-Y2P9.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p><img src="https://teachyourkidscode.com/wp-content/uploads/2021/02/Treasure-Map-Coding-Activity-scaled.jpg" alt="Treasure map" /></p>
<p>I have been meaning to try out <a target="_blank" href="https://github.com/statelyai/xstate">xstate</a> for long now by building a stopwatch example using it. For functionality, I referred to iOS's stopwatch in their native <strong>Clock</strong> App. </p>
<p>But why <strong>stopwatch</strong> ? Well, I was once asked in an interview to build the same and I struggled with it. For me, the struggling bit was visualizing the <strong>states</strong> a stopwatch can be in. This is the code I submitted post-interview (the interviewer was very considerate and told me that though I couldn't complete the implementation within the time limit, I could submit the solution later).</p>
<pre><code class="lang-js"><span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">StopWatch</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-built_in">this</span>.timerStart = <span class="hljs-number">0</span>;
  <span class="hljs-built_in">this</span>.timerEnd = <span class="hljs-number">0</span>;
  <span class="hljs-keyword">let</span> time = { <span class="hljs-attr">hours</span>: <span class="hljs-number">0</span>, <span class="hljs-attr">minutes</span>: <span class="hljs-number">0</span>, <span class="hljs-attr">seconds</span>: <span class="hljs-number">0</span> };
  <span class="hljs-keyword">let</span> interval;

  <span class="hljs-built_in">this</span>.setTime = <span class="hljs-function"><span class="hljs-keyword">function</span> (<span class="hljs-params"></span>) </span>{
    <span class="hljs-keyword">if</span> (<span class="hljs-built_in">this</span>.timerEnd) {
      time.seconds = (<span class="hljs-built_in">this</span>.timerEnd / <span class="hljs-number">1000</span>) % <span class="hljs-number">60</span>;
      <span class="hljs-keyword">if</span> (time.seconds === <span class="hljs-number">0</span>) {
        time.minutes += <span class="hljs-number">1</span>;
      }
      <span class="hljs-keyword">if</span> (time.minutes === <span class="hljs-number">60</span>) {
        time.hours += <span class="hljs-number">1</span>;
        time.minutes = <span class="hljs-number">0</span>;
      }
    }
  };

  <span class="hljs-built_in">this</span>.getTime = <span class="hljs-function"><span class="hljs-keyword">function</span> (<span class="hljs-params"></span>) </span>{
    <span class="hljs-keyword">return</span> <span class="hljs-string">`<span class="hljs-subst">${time.hours}</span>:<span class="hljs-subst">${time.minutes}</span>:<span class="hljs-subst">${time.seconds}</span>`</span>;
  };

  <span class="hljs-built_in">this</span>.start = <span class="hljs-function"><span class="hljs-keyword">function</span> (<span class="hljs-params"></span>) </span>{
    <span class="hljs-built_in">this</span>.timerEnd = <span class="hljs-built_in">this</span>.timerStart;
    <span class="hljs-built_in">console</span>.log(<span class="hljs-built_in">this</span>.getTime());
    interval = <span class="hljs-built_in">setInterval</span>(<span class="hljs-function">() =&gt;</span> {
      <span class="hljs-built_in">this</span>.timerEnd = <span class="hljs-built_in">this</span>.timerEnd + <span class="hljs-number">1000</span>;
      <span class="hljs-built_in">this</span>.setTime();
      <span class="hljs-built_in">console</span>.log(<span class="hljs-built_in">this</span>.getTime());
    }, <span class="hljs-number">1000</span>);
  };
  <span class="hljs-built_in">this</span>.stop = <span class="hljs-function"><span class="hljs-keyword">function</span> (<span class="hljs-params"></span>) </span>{
    <span class="hljs-built_in">clearInterval</span>(interval);
    <span class="hljs-built_in">this</span>.timerStart = <span class="hljs-built_in">this</span>.timerEnd;
  };
  <span class="hljs-built_in">this</span>.reset = <span class="hljs-function"><span class="hljs-keyword">function</span> (<span class="hljs-params"></span>) </span>{
    <span class="hljs-built_in">clearInterval</span>(interval);
    <span class="hljs-built_in">this</span>.timerStart = <span class="hljs-number">0</span>;
    <span class="hljs-built_in">this</span>.timerEnd = <span class="hljs-number">0</span>;
    time = { <span class="hljs-attr">hours</span>: <span class="hljs-number">0</span>, <span class="hljs-attr">minutes</span>: <span class="hljs-number">0</span>, <span class="hljs-attr">seconds</span>: <span class="hljs-number">0</span> };
  };
}
</code></pre>
<p>Not my best code and actually doesn't feel intuitive enough. There are still no <strong>states</strong> for the stopwatch but merely functions that can <strong>start, stop or reset</strong> the timer and also redundant variables. 
So we will tackle the non-intuitiveness of this solution using <strong>xstate</strong>.</p>
<h2 id="heading-reference-point">Reference Point</h2>
<p>Here is what the iOS stopwatch looks like in its <strong>initial</strong> state:
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659247194565/yeWZ7MfD8.PNG" alt="Initial-sw.PNG" /></p>
<p>Here is what the iOS stopwatch looks like in its <strong>running</strong> state:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659247231647/cA7OI5QnX.PNG" alt="Running-sw.PNG" /></p>
<p>Here is what the iOS stopwatch looks like in its <strong>paused</strong> state:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659247324996/AiZ_03GTO.PNG" alt="Paused-sw.PNG" /></p>
<p>Using the app helped me land on the possible states faster.</p>
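<p>Before reaching for xstate, the three states above can be jotted down as a plain transition table. The following is a minimal sketch of my own (not part of the machine we will build; the <code>next</code> helper is a made-up name) just to make the state/event pairs concrete :- </p>

```javascript
// Hypothetical sketch: the three observed stopwatch states and the
// button presses that move between them, as a plain lookup table.
const transitions = {
  initial: { PRESS_START: "running" },
  running: { PRESS_STOP: "paused" },
  paused: { PRESS_START: "running", PRESS_RESET: "initial" },
};

// Pure helper: given a state and an event, return the next state,
// or stay in the current state if the event isn't handled there.
function next(state, event) {
  return (transitions[state] && transitions[state][event]) || state;
}
```

<p>For example, <code>next("initial", "PRESS_START")</code> gives <code>"running"</code>, while an unhandled event like <code>next("initial", "PRESS_STOP")</code> just stays in <code>"initial"</code>. xstate's <code>states</code>/<code>on</code> config is essentially a richer version of this table.</p>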
<h2 id="heading-code-playground">Code Playground</h2>
<p>Now, before we proceed: I had no clue how the whole API for creating a state machine with xstate works. So I went to their <a target="_blank" href="https://xstate.js.org/docs/">docs</a> and landed on the <a target="_blank" href="https://stately.ai/viz">stately visualizer</a>, where one can load the default example (a machine that visualizes the possible <strong>states</strong> of a <code>fetch</code> operation). Clicking the <strong>Visualize</strong> button renders a neat statechart diagram depicting the actions and states of the machine.</p>
<p>So I began by modifying this existing machine to create the <code>stopWatchMachine</code>. </p>
<p><strong>Note</strong> :- The default code playground was in TypeScript and I stuck to it, so you can ignore the type-specific tokens in the code I am going to share further.</p>
<h2 id="heading-building-the-stopwatch-without-lap-feature">Building the stopwatch without lap feature</h2>
<p>In the screenshots of the iOS stopwatch above, you will notice there is also a <strong>Lap</strong> button, used to create laps that measure in-between time intervals. We will eventually get to that, but for starters, let's focus on the <strong>start, stop and reset</strong> functionality. </p>
<p>Alright then, let's start by laying out some code :- </p>
<h3 id="heading-initial-state">Initial State</h3>
<pre><code><span class="hljs-keyword">import</span> { <span class="hljs-title">createMachine</span> } <span class="hljs-title"><span class="hljs-keyword">from</span></span> <span class="hljs-string">"xstate"</span>;

<span class="hljs-class"><span class="hljs-keyword">interface</span> <span class="hljs-title">Context</span> </span>{
  elapsedTime: number;
}


const stopWatchMachine <span class="hljs-operator">=</span> createMachine<span class="hljs-operator">&lt;</span>Context<span class="hljs-operator">&gt;</span>({
  id: <span class="hljs-string">"stopWatch"</span>,
  initial: <span class="hljs-string">"initial"</span>, <span class="hljs-comment">// Note this can be anything - I just like to call it initial itself.</span>
  context: {
    elapsedTime: <span class="hljs-number">0</span>
  },
  states: {
    initial: {
      on: {
        PRESS_START: <span class="hljs-string">"running"</span>
      }
    }
}
})
</code></pre><p><strong>Explanation</strong> :</p>
<ul>
<li><code>initial</code> denotes what our starting state is going to be when this state machine is created.</li>
<li><code>context</code> is an object which holds the variables on which we actually want to operate. So in our case, we care about how much time has elapsed since you hit <strong>start</strong> on your stopwatch. We are measuring the value using the <code>elapsedTime</code> variable. </li>
<li>Now comes the <code>states</code> object which really forms the basis of the whole statechart paradigm. Our first state in it is <code>initial</code> which itself is an object containing an <code>on</code> key. This <code>on</code> key contains <strong>key-value</strong> pairs of <code>event : state to transition when that event happens</code> i.e. when the <code>PRESS_START</code> event happens, we want our state machine to transition to <code>running</code> state. </li>
</ul>
<p><strong>Note</strong> - Don't worry if you don't see anything like a <code>running</code> state yet. We will get to it in the next step. If you click the <strong>Visualize</strong> button at this point, the code playground will complain about the <code>running</code> state not being present. </p>
<h3 id="heading-running-state">Running State</h3>
<pre><code><span class="hljs-keyword">import</span> { <span class="hljs-title">createMachine</span>, <span class="hljs-title">assign</span> } <span class="hljs-title"><span class="hljs-keyword">from</span></span> <span class="hljs-string">"xstate"</span>;

<span class="hljs-class"><span class="hljs-keyword">interface</span> <span class="hljs-title">Context</span> </span>{
  elapsedTime: number;
}


const stopWatchMachine <span class="hljs-operator">=</span> createMachine<span class="hljs-operator">&lt;</span>Context<span class="hljs-operator">&gt;</span>({
  id: <span class="hljs-string">"stopWatch"</span>,
  initial: <span class="hljs-string">"initial"</span>,
  context: {
    elapsedTime: <span class="hljs-number">0</span>
  },
  states: {
    initial: {
      on: {
        PRESS_START: <span class="hljs-string">"running"</span>
      }
    },
    running: {
      on: {
        PRESS_STOP: <span class="hljs-string">"paused"</span>,
        TICK: {
          actions: assign({
            elapsedTime: (context) <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
              <span class="hljs-keyword">return</span> (context.elapsedTime <span class="hljs-operator">+</span><span class="hljs-operator">=</span> <span class="hljs-number">20</span>);
            }
          })
        },
      },
      invoke: {
        src: () <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> (cb) <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
          const interval <span class="hljs-operator">=</span> setInterval(() <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
            cb(<span class="hljs-string">"TICK"</span>);
          }, <span class="hljs-number">20</span>);

          <span class="hljs-keyword">return</span> () <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
            clearInterval(interval);
          };
        }
      }
    },
}
})
</code></pre><p><strong>Explanation</strong> : Sorry for bombarding you with a lot of code in one step, but that's really what happens while our stopwatch is in the <strong>running</strong> state. </p>
<ul>
<li>Inside the <code>running</code> state, we have a <code>on</code> binding for <code>PRESS_STOP</code> which should bring the machine in the <code>paused</code> state. </li>
<li>Now comes the part where we start thinking about how to change the <code>elapsedTime</code> variable in a timer fashion. We know that a clock/timer works on basis of a certain <code>TICK</code> which happens every few milliseconds. So as soon as we enter this <code>running</code> state, we want to start a timer. To do this, we need to setup the <a target="_blank" href="https://xstate.js.org/docs/guides/communication.html#invoking-callbacks"><code>invoke</code></a> property first. </li>
<li>One of the ways to achieve interval ticks in JS is the <code>setInterval</code> API. Instead of modifying <code>elapsedTime</code> within the <code>invoke</code> object, we emit a <code>TICK</code> event every <code>20</code> milliseconds (chosen for precision) using <code>setInterval</code>. The <code>cb</code> is supplied by xstate and is used to send events to the parent (in this case, the <code>running</code> state). Also, the <code>clearInterval</code> bit is not strictly necessary, but it's good practice to clean up the interval. </li>
<li>Alright, so now we need to listen for that emitted <code>TICK</code> event inside the <code>running</code> block also. Inside this <code>TICK</code> event, we want a certain <code>action</code> to be performed. This <code>action</code> should <code>assign</code> our <code>elapsedTime</code> context variable a certain value. So we simply increment <code>context.elapsedTime</code> by <code>20</code> on each <code>TICK</code> and return it. </li>
</ul>
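<p>To see the moving parts of the <code>running</code> state in isolation, here is a plain-JS sketch of my own (an illustration, not xstate API) of its two pieces: the invoked interval service that emits <code>TICK</code>, and the assign-style handler that bumps <code>elapsedTime</code> :- </p>

```javascript
// Hypothetical stand-ins for what the machine's `running` state wires up.

// Mirrors the invoked callback service: emit "TICK" every 20ms and
// return a disposer, like the cleanup xstate runs on leaving the state.
function startTickService(send) {
  const interval = setInterval(() => send("TICK"), 20);
  return () => clearInterval(interval);
}

// Mirrors the assign() action: each TICK adds the 20ms interval
// duration to the elapsed time held in context.
function onTick(context) {
  return { ...context, elapsedTime: context.elapsedTime + 20 };
}
```

<p>In the real machine, xstate owns the wiring: entering <code>running</code> starts the service, each emitted <code>TICK</code> runs the action, and leaving the state calls the disposer.</p>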
<h3 id="heading-paused-state">Paused State</h3>
<pre><code><span class="hljs-keyword">import</span> { <span class="hljs-title">createMachine</span>, <span class="hljs-title">assign</span> } <span class="hljs-title"><span class="hljs-keyword">from</span></span> <span class="hljs-string">"xstate"</span>;

<span class="hljs-class"><span class="hljs-keyword">interface</span> <span class="hljs-title">Context</span> </span>{
  elapsedTime: number;
}

const stopWatchMachine <span class="hljs-operator">=</span> createMachine<span class="hljs-operator">&lt;</span>Context<span class="hljs-operator">&gt;</span>({
  id: <span class="hljs-string">"stopWatch"</span>,
  initial: <span class="hljs-string">"initial"</span>,
  context: {
    elapsedTime: <span class="hljs-number">0</span>
  },
  states: {
    initial: {
      on: {
        PRESS_START: <span class="hljs-string">"running"</span>
      }
    },
    running: {
      on: {
        PRESS_STOP: <span class="hljs-string">"paused"</span>,
        TICK: {
          actions: assign({
            elapsedTime: (context) <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
              <span class="hljs-keyword">return</span> (context.elapsedTime <span class="hljs-operator">+</span><span class="hljs-operator">=</span> <span class="hljs-number">20</span>);
            }
          })
        },
      },
      invoke: {
        src: () <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> (cb) <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
          const interval <span class="hljs-operator">=</span> setInterval(() <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
            cb(<span class="hljs-string">"TICK"</span>);
          }, <span class="hljs-number">20</span>);

          <span class="hljs-keyword">return</span> () <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
            clearInterval(interval);
          };
        }
      }
    },
   paused: {
       on: {
        PRESS_START: <span class="hljs-string">"running"</span>,
        PRESS_RESET: {
          target: <span class="hljs-string">"initial"</span>,
          actions: assign({
            elapsedTime: (context) <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
              <span class="hljs-keyword">return</span> (context.elapsedTime <span class="hljs-operator">=</span> <span class="hljs-number">0</span>);
            },
          })
        }
     }
}
}
})
</code></pre><p><strong>Explanation</strong> :</p>
<ul>
<li>In the <code>paused</code> state, currently, a user can press start to resume and go back to <code>running</code> state or a user can press reset to go back to the <code>initial</code> state. The <code>on</code> bindings depict the above statement in a declarative fashion. </li>
<li>A few more things happen inside the <code>PRESS_RESET</code> event binding. Here we want changes to <code>elapsedTime</code> as well as a state transition to <code>initial</code> when this event gets emitted. <code>context.elapsedTime</code> is reset to <code>0</code> using an <code>action</code>. </li>
</ul>
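<p>Putting the three states together, the behaviour so far can be sketched as a single pure reducer (again my own plain-JS illustration, with a made-up <code>reduce</code> helper) that returns both the next state and the next context :- </p>

```javascript
// Hypothetical reducer mirroring the machine built so far: start from
// `initial`, run, pause, and reset back with a cleared context.
function reduce({ state, context }, event) {
  if (state === "initial" && event === "PRESS_START")
    return { state: "running", context };
  if (state === "running" && event === "PRESS_STOP")
    return { state: "paused", context };
  if (state === "paused" && event === "PRESS_START")
    return { state: "running", context };
  if (state === "paused" && event === "PRESS_RESET")
    // Same effect as the PRESS_RESET action: transition to `initial`
    // and reset elapsedTime to 0.
    return { state: "initial", context: { elapsedTime: 0 } };
  // Unhandled event in this state: no transition, no context change.
  return { state, context };
}
```

<p>Note how <code>PRESS_RESET</code> only has an effect in the <code>paused</code> state, exactly like the machine's <code>on</code> bindings.</p>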
<h3 id="heading-result">Result</h3>
<p>This partially completes the building of our state machine, which can perform the <strong>start, stop and reset</strong> operations of our stopwatch. We can create a <code>stopWatchService</code> like so and start using it :- </p>
<pre><code><span class="hljs-keyword">import</span> { <span class="hljs-title">interpret</span>} <span class="hljs-title"><span class="hljs-keyword">from</span></span> <span class="hljs-string">"xstate"</span>;

const stopWatchService <span class="hljs-operator">=</span> interpret(stopWatchMachine);
stopWatchService.start();

<span class="hljs-comment">// Now you can emit the events</span>
stopWatchService.<span class="hljs-built_in">send</span>({ <span class="hljs-keyword">type</span>: <span class="hljs-string">"PRESS_START"</span> });

(async () <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
  <span class="hljs-comment">// delay is a promise wrapper on setTimeout</span>
  await delay(<span class="hljs-number">2000</span>);
  stopWatchService.<span class="hljs-built_in">send</span>({ <span class="hljs-keyword">type</span>: <span class="hljs-string">"PRESS_STOP"</span> });
  await delay(<span class="hljs-number">2000</span>);
  stopWatchService.<span class="hljs-built_in">send</span>({ <span class="hljs-keyword">type</span>: <span class="hljs-string">"PRESS_START"</span> });
  await delay(<span class="hljs-number">2000</span>);
  stopWatchService.<span class="hljs-built_in">send</span>({ <span class="hljs-keyword">type</span>: <span class="hljs-string">"PRESS_STOP"</span> });
  await delay(<span class="hljs-number">2000</span>);
  stopWatchService.<span class="hljs-built_in">send</span>({ <span class="hljs-keyword">type</span>: <span class="hljs-string">"PRESS_RESET"</span> });
  await delay(<span class="hljs-number">2000</span>);
  stopWatchService.<span class="hljs-built_in">send</span>({ <span class="hljs-keyword">type</span>: <span class="hljs-string">"PRESS_START"</span> });
  await delay(<span class="hljs-number">5000</span>);
})();
</code></pre><p>In case you want to listen to the state changes (which is needed if you want to consume the <code>elapsedTime</code> value or see what the current <code>state</code> of the machine is), you can do so by passing a callback to the <code>onTransition</code> function :- </p>
<pre><code>const stopWatchService <span class="hljs-operator">=</span> interpret(stopWatchMachine);
stopWatchService.onTransition((state)<span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span>{
console.log(state.context.elapsedTime, state.<span class="hljs-built_in">value</span>)
})
stopWatchService.start();

<span class="hljs-comment">// Start emitting events as shown in above code block</span>
</code></pre><h2 id="heading-building-the-stopwatch-with-lap-feature">Building the stopwatch with lap feature</h2>
<p>Before we proceed, here is how the <code>Lap</code> UI looks in the iOS app :- </p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659248077885/n9hHlANDG.PNG" alt="Running-lap-sw.PNG" /></p>
<p>You can see that once a user hits the <code>Lap</code> button, an entry for that lap is added to the list shown below the timer. So besides <code>elapsedTime</code>, we also have to maintain a <code>laps</code> context variable, which will start as an empty <code>[]</code>. Also, the <code>Lap</code> button is only enabled when the stopwatch is in its <code>running</code> state. This gives us enough info to add the lap feature to our state machine like so :- </p>
<pre><code><span class="hljs-keyword">import</span> { <span class="hljs-title">createMachine</span>, <span class="hljs-title">assign</span> } <span class="hljs-title"><span class="hljs-keyword">from</span></span> <span class="hljs-string">"xstate"</span>;

<span class="hljs-class"><span class="hljs-keyword">interface</span> <span class="hljs-title">Context</span> </span>{
  elapsedTime: number;
  laps: Array<span class="hljs-operator">&lt;</span>{ startTime: number; elapsedTime: number }<span class="hljs-operator">&gt;</span>;
}

const stopWatchMachine <span class="hljs-operator">=</span> createMachine<span class="hljs-operator">&lt;</span>Context<span class="hljs-operator">&gt;</span>({
  id: <span class="hljs-string">"stopWatch"</span>,
  initial: <span class="hljs-string">"initial"</span>,
  context: {
    elapsedTime: <span class="hljs-number">0</span>,
    laps: []
  },
  states: {
    initial: {
      on: {
        PRESS_START: <span class="hljs-string">"running"</span>
      }
    },
    running: {
      on: {
        PRESS_STOP: <span class="hljs-string">"paused"</span>,
        TICK: {
          actions: assign({
            elapsedTime: (context) <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
              <span class="hljs-keyword">return</span> (context.elapsedTime <span class="hljs-operator">+</span><span class="hljs-operator">=</span> <span class="hljs-number">20</span>);
            },
            laps: (context) <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
              const laps <span class="hljs-operator">=</span> context.laps;
              const latestLap <span class="hljs-operator">=</span> laps[laps.<span class="hljs-built_in">length</span> <span class="hljs-operator">-</span> <span class="hljs-number">1</span>] ?? {
                startTime: <span class="hljs-number">0</span>,
                elapsedTime: <span class="hljs-number">0</span>
              };
              <span class="hljs-keyword">if</span> (laps.<span class="hljs-built_in">length</span> <span class="hljs-operator">=</span><span class="hljs-operator">=</span><span class="hljs-operator">=</span> <span class="hljs-number">0</span>) {
                laps.<span class="hljs-built_in">push</span>(latestLap);
              }
              latestLap.elapsedTime <span class="hljs-operator">=</span> context.elapsedTime <span class="hljs-operator">-</span> latestLap.startTime;
              <span class="hljs-keyword">return</span> laps;
            }
          })
        },
        PRESS_LAP: {
          actions: assign({
            laps: (context) <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
              const laps <span class="hljs-operator">=</span> context.laps;
              const newLap <span class="hljs-operator">=</span> { startTime: context.elapsedTime, elapsedTime: <span class="hljs-number">0</span> };
              laps.<span class="hljs-built_in">push</span>(newLap);
              <span class="hljs-keyword">return</span> laps;
            }
          })
        }
      },
      invoke: {
        src: () <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> (cb) <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
          const interval <span class="hljs-operator">=</span> setInterval(() <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
            cb(<span class="hljs-string">"TICK"</span>);
          }, <span class="hljs-number">20</span>);

          <span class="hljs-keyword">return</span> () <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
            clearInterval(interval);
          };
        }
      }
    },
    paused: {
      on: {
        PRESS_START: <span class="hljs-string">"running"</span>,
        PRESS_RESET: {
          target: <span class="hljs-string">"initial"</span>,
          actions: assign({
            elapsedTime: (context) <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
              <span class="hljs-keyword">return</span> (context.elapsedTime <span class="hljs-operator">=</span> <span class="hljs-number">0</span>);
            },
            laps: (context) <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
              <span class="hljs-keyword">return</span> (context.laps <span class="hljs-operator">=</span> []);
            }
          })
        }
      }
    }
  }
});
</code></pre><p><strong>Explanation</strong> : Holy moly, the state machine is quite a bit bigger than before. That's because we now set the <code>laps</code> context variable in the appropriate machine states. 
First, let's look at the new <code>PRESS_LAP</code> event, which can happen when the machine is in the <code>running</code> state. If that event happens, we don't transition to any other state; we just push a new lap to the <code>laps</code> list inside our <code>action</code> block. Our <code>lap</code> data structure is an object with two keys - </p>
<ul>
<li><code>startTime</code> - This will be equal to the total <code>elapsedTime</code> on the timer when the <code>PRESS_LAP</code> event got emitted. </li>
<li><code>elapsedTime</code> - This will be the <code>elapsedTime</code> for that <code>lap</code>. We will see its derivation next. </li>
</ul>
<p>Now, we already have a <code>TICK</code> event that happens every <code>20</code> milliseconds while the machine is in the <code>running</code> state. So we also handle the <code>laps</code> context variable in the <code>TICK</code> event's <code>action</code> block. Only the <code>latestLap</code> needs to keep ticking along with the running timer, so we check the last element of the <code>laps</code> list. If it isn't there (before the first lap), we insert one ourselves with <code>startTime</code> as <code>0</code> and <code>elapsedTime</code> as <code>0</code>. For the <code>latestLap</code>, we determine its <code>elapsedTime</code> by subtracting its <code>startTime</code> (remember, we set it when the <code>PRESS_LAP</code> event got triggered) from the total <code>elapsedTime</code> of the timer. After updating the <code>latestLap</code>, we return the <code>laps</code> list.
Also, in our <code>paused</code> state, we have the <code>PRESS_RESET</code> event definition. Earlier we only reset <code>elapsedTime</code> to <code>0</code>. Now we also reset <code>laps</code> to <code>[]</code>.</p>
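<p>The lap bookkeeping described above can be distilled into two small helpers (my own plain-JS illustration; <code>pressLap</code> and <code>tickLaps</code> are made-up names, not part of the machine) :- </p>

```javascript
// Hypothetical sketch of the lap logic from the machine's actions.

// PRESS_LAP: open a new lap whose startTime is the current total
// elapsed time on the stopwatch.
function pressLap(laps, totalElapsed) {
  return [...laps, { startTime: totalElapsed, elapsedTime: 0 }];
}

// TICK: keep only the latest lap's elapsedTime in sync with the total.
// If no lap exists yet (first TICK), seed an implicit first lap.
function tickLaps(laps, totalElapsed) {
  const next = laps.length === 0 ? [{ startTime: 0, elapsedTime: 0 }] : [...laps];
  const latest = next[next.length - 1];
  latest.elapsedTime = totalElapsed - latest.startTime;
  return next;
}
```

<p>A lap's <code>elapsedTime</code> is always <code>totalElapsed - startTime</code>, so earlier laps freeze automatically once a new lap starts.</p>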
<h2 id="heading-final-result">Final Result</h2>
<p>This finally completes our state machine, which can perform the <strong>start, stop, lap and reset</strong> operations of our stopwatch. Interpreting and starting the machine remains the same. </p>
<p>Following is the visualizer diagram of our final state machine :- </p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659248538372/LyzgH8EWv.png" alt="stately-viz-sw.png" /></p>
<p>Also, here is a codesandbox implementation with a UI. The UI is not polished and doesn't do justice to iOS styling; it's just to see our <code>stopWatchMachine</code> powering a UI :- </p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://codesandbox.io/s/xstate-stopwatch-vanilla-mm1r39?file=/src/index.ts:209-2011">https://codesandbox.io/s/xstate-stopwatch-vanilla-mm1r39?file=/src/index.ts:209-2011</a></div>
<h2 id="heading-closing-thoughts">Closing thoughts</h2>
<p>It's the declarative API of xstate that really abstracts away the nested <code>if</code>/<code>switch</code> complexity when one is building an application. Once the machine was set up, I liked how <strong>deterministic</strong> the behaviour was. Overall, it was fun to scratch the surface of xstate by building this example. There is also a <a target="_blank" href="https://stately.ai/registry/new">low-code</a> way of drawing the statechart first, which automatically creates a machine for you. I haven't tried it, but it sounds very cool. </p>
<p><img src="https://www.rawshorts.com/blog/wp-content/uploads/2019/06/stopwatch-8-gif-download-stopwatch-gif-animation.gif" alt="Stopwatch gif" /></p>
<p>Thank you for your time :)</p>
]]></content:encoded></item><item><title><![CDATA[500 Commits Later At Hashnode]]></title><description><![CDATA[Today I completed 6 months as a software developer at Hashnode 🎉.

A Bit About My Work
Most of my work has been around frontend before joining Hashnode. I started with Wipro and then joined Altran (acquired by Capgemini). Picked up few learnings fro...]]></description><link>https://blog.lakbychance.com/500-commits-later-at-hashnode</link><guid isPermaLink="true">https://blog.lakbychance.com/500-commits-later-at-hashnode</guid><category><![CDATA[Hashnode]]></category><category><![CDATA[Learning Journey]]></category><category><![CDATA[Developer]]></category><category><![CDATA[General Programming]]></category><category><![CDATA[General Advice]]></category><dc:creator><![CDATA[Lakshya Thakur]]></dc:creator><pubDate>Sat, 30 Apr 2022 05:35:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1651259925559/aADIl7s97.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Today I completed 6 months as a software developer at Hashnode 🎉.</p>
<p><img src="https://media.giphy.com/media/l0IykXQnucc4kPWgg/giphy.gif" alt="6 months" class="image--center mx-auto" /></p>
<h3 id="heading-a-bit-about-my-work">A Bit About My Work</h3>
<p>Before joining Hashnode, most of my work had been around frontend. I started with Wipro and then joined Altran (acquired by Capgemini). I picked up a few learnings from being active on LinkedIn (good old LinkedIn), contributing to <a target="_blank" href="https://stackoverflow.com/users/8130690/lakshya-thakur">Stack Overflow</a> and making some side projects. </p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/lakbychance">https://github.com/lakbychance</a></div>
<p>Coming from a service-based MNC background of 3.5 years, Hashnode, being a startup, is a breath of fresh air in many aspects. </p>
<p>I have key highlights to share around my experience and I will keep them brief :</p>
<h3 id="heading-being-part-of-hashnode-community-revamp">Being Part Of Hashnode Community Revamp</h3>
<p>My first task, or to be precise, a series of tasks as I joined Hashnode, was to become part of the hashnode.com revamp. Trust me, there were moments where I felt like we were almost done, but the finish line just kept getting pushed further. I liked, though, that we didn't compromise on quality. From frontend to backend, we made iterative improvements, discarded designs and re-approached with better ones, and worked a lot on performance in the final days. This whole thing was literally my first 3.5 months at Hashnode, and I am glad I got the chance to be a part of it. I got to learn a lot from the amazing folks here. Besides the technical learnings, I picked up some of the design and SEO shenanigans and now try to incorporate them into my side projects (<a target="_blank" href="https://prings.vercel.app">Prings</a> is the latest).</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://townhall.hashnode.com/say-hello-to-the-new-hashnode">https://townhall.hashnode.com/say-hello-to-the-new-hashnode</a></div>
<h3 id="heading-creating-fun-side-project-jinglypuff">Creating Fun Side Project - JinglyPuff</h3>
<p>So amidst the community platform revamp, <a class="user-mention" href="https://hashnode.com/@fazlerocks">Syed Fazle Rahman</a> (CEO and co-founder of Hashnode) had an idea to make a very small, fun project around a Christmas theme. <a target="_blank" href="https://jinglypuff.hashnode.com">JinglyPuff</a> was born out of it and was really my first visible contribution at Hashnode, since the revamp was still a secret 🤫. </p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1651253538988/RzJ0mnSbA.png" alt="Screen Shot 2022-04-29 at 11.00.50 PM.png" class="image--center mx-auto" /></p>
<h3 id="heading-first-fully-owned-feature-profile-hover-cards">First Fully Owned Feature - Profile Hover Cards</h3>
<p>Though the community revamp was a mix of many tasks, most of those involved replicating existing functionality in a better way. Unlike those, this was my first feature development with full ownership of both frontend and backend at Hashnode. I was very excited when it shipped, and yes, within 5 seconds I noticed a UI bug in prod, but I patched it before anybody noticed (hopefully) and gave myself a pat. </p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://twitter.com/hashnode/status/1501999407713353730?s=20&amp;t=rniDzxpDsbOwlxop0n1OYQ">https://twitter.com/hashnode/status/1501999407713353730?s=20&amp;t=rniDzxpDsbOwlxop0n1OYQ</a></div>
<h3 id="heading-participating-in-code-reviews">Participating In Code Reviews</h3>
<p>Before Hashnode, I had always had my code reviewed by seniors. Here we don't have that kind of hierarchy. I like to participate in reviews and make sure we ship quality stuff. We are still figuring out how to offload the relevant checks to tooling and reduce the cognitive load on developers. Besides the technical aspect, I like to participate in code reviews since they are one extra point of communication in a remote setup. LGTM.</p>
<p><img src="https://i0.wp.com/blog.codacy.com/wp-content/uploads/2015/04/6a424-07lfxgwbienqvig5h.gif?resize=320%2C240&amp;ssl=1" alt="kid thumbs computer gif" class="image--center mx-auto" /></p>
<h3 id="heading-hiring-for-hashnode">Hiring for Hashnode</h3>
<p>As you may know, we are expanding our team, and that means many applicants and interviews. I had never been on the interviewer side of the process, and here I got the opportunity to interview folks. Our process is straightforward and consists of take-home assignments. Reviewing those assignments, forming follow-up questions and discussing trade-offs with the candidates has been a learning experience in itself. Interesting folks have recently joined us at Hashnode 🥳 and are already creating waves. </p>
<p><img src="https://media1.giphy.com/media/ouvSzoMTx6nXN6PiGu/200w.gif?cid=82a1493bd3j6616jh58oh2jwi1ylblpy8mnzqugyiuj3mitr&amp;rid=200w.gif&amp;ct=g" alt="interview dwight gif" class="image--center mx-auto" /></p>
<h3 id="heading-pitching-myself-as-part-time-video-editor">Pitching Myself As Part Time Video Editor</h3>
<p>From my first day, I made sure that folks at Hashnode knew I like to do creative stuff, by letting Tim Cook wish them <em>Good Morning</em> as I unboxed the work MacBook</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/shorts/RKbm8bNgvmA">https://www.youtube.com/shorts/RKbm8bNgvmA</a></div>
<p>But yes, since then I go around jokingly pitching myself as part time video editor for Hashnode. I mean you gotta see the jinglypuff launch video :</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://twitter.com/hashnode/status/1470736935048159234?s=20&amp;t=rniDzxpDsbOwlxop0n1OYQ">https://twitter.com/hashnode/status/1470736935048159234?s=20&amp;t=rniDzxpDsbOwlxop0n1OYQ</a></div>
<h3 id="heading-learning-new-jargons-and-acronyms">Learning New Jargons And Acronyms</h3>
<p>OKRs, CTRs, MAUs, investor jargon and whatnot are things you quickly pick up when working at a startup. Just today I learned what <strong>ymmv</strong> (your mileage may vary) stands for. Trust me, I am not a very good acronym/jargon person. I totally google them most of the time, but when you are at a fast-growing startup, it's normal to come across these. I am pretty sure I know the least.</p>
<h3 id="heading-making-memes-very-important">Making Memes (very important)</h3>
<p>I think our education system should now include a whole chapter on <strong>memes</strong>. I maintain a folder called <strong>memes</strong> on my work laptop and have created five till now. Hoping to kid around more and inspire people. Here is the latest one:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1651254024282/kWBswbCgK.jpeg" alt="too_damn_high_code_reviews.jpeg" class="image--center mx-auto" /></p>
<p>I mean, too much of anything, even code reviews, ain't good.</p>
<h3 id="heading-brainstorming-with-team-around-documentation-guidelines-andamp-engineering-practices">Brainstorming With Team Around Documentation, Guidelines &amp; Engineering Practices.</h3>
<p>This has increased manyfold now and I love it. Every team member brings something to the table, so with more folks joining in, we have had some good discussions around engineering guidelines, documentation and processes. We have also implemented a subset of these, and hopefully by the end of the year we will have much stronger foundations here. </p>
<p><img src="https://cdn.dribbble.com/users/783963/screenshots/3331067/rmw_ge-3.gif" alt="brainstorming_gif" class="image--center mx-auto" /></p>
<h3 id="heading-my-first-article-landed-on-hashnodes-official-engineering-blog">My First Article Landed On Hashnode's Official Engineering Blog</h3>
<p>Just this week, I got to publish the following article on our engineering blog.
It feels good to be able to talk about your engineering efforts, document them, and simultaneously represent your organization. Funnily enough, this is my first featured article on Hashnode 😎.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://engineering.hashnode.com/stuck-in-the-middleware">https://engineering.hashnode.com/stuck-in-the-middleware</a></div>
<h3 id="heading-pizza-monday">Pizza Monday</h3>
<p>We had our first Pizza Monday (on a Tuesday) two weeks ago, where I concentrated a lot on my pizza and had fun playing <a target="_blank" href="https://skribbl.io/">skribbl</a> with the folks. </p>
<h3 id="heading-engineering-is-one-aspect-of-it">Engineering is One Aspect Of It</h3>
<p><img src="https://media4.giphy.com/media/5hvX5Gvh2ylDhNT1Jc/giphy.gif" alt="so much more" class="image--center mx-auto" /></p>
<p>At Hashnode's weekly sync-ups, all teams gather and share what they achieved this week, what's in the works for next time, and more. Besides our web and mobile team updates, we discuss the company's growth, investor updates, incident &amp; recovery insights, marketing updates, management updates, design updates, content updates, etc. Due to this intersection of information, there is also an opportunity to learn something more. I personally learned that even though Hashnode is powered by a lot of tech, the need to scale depends very much on how much our users love it. And engineering alone can't solve that. That's why we have these amazing folks at Hashnode who come up with the strategies and design that drive engagement and ultimately make sure the community thrives.</p>
<h3 id="heading-making-progress">Making Progress</h3>
<p>When I came to Hashnode, I identified myself as a frontend developer, but now I am steadily moving towards my own vision of being a <strong>Software Engineer</strong>. Much of the credit for this goes to all the folks at Hashnode. There is always a helping hand around, and that matters a lot to me from a mentorship perspective. Nobody here sets a limit on what you can or cannot do. Not that I am going around learning every possible thing (that can get counterproductive), but with an iterative approach, I feel I am progressing, and that's important. In fact, as part of that progress, I undertook the migration of one of the core components that powers search on Hashnode, and a detailed article will soon land on our official engineering blog. So stay tuned ✨. Here's to unlocking more potential together.</p>
<p><img src="http://pa1.narvii.com/6665/790a1c04081d99ba857c0bdb313cbe640aa5625a_00.gif" alt="ultra instinct" class="image--center mx-auto" /></p>
<h3 id="heading-personal-commitments">Personal Commitments</h3>
<p>I mean, it's a startup, but it's flexible. You get to decide your work hours. You can make time for what's important to you; for me, even before I joined Hashnode, I was following a set exercise routine, and that continued after I joined. I was able to attain a good health goal this year alongside my work commitments, and I see that as a win. </p>
<h3 id="heading-wrapping-up">Wrapping Up</h3>
<p>So these were the highlights of my 6-month journey with Hashnode. I still remember creating an account on Hashnode in 2020 but not really using it until 2021, applying for the job within one month of usage, and now it's a part of my daily routine, sometimes as a developer and sometimes as a blogger. </p>
<p>Here are some special mentions as part of this experience  - <a class="user-mention" href="https://hashnode.com/@vamsirao">Vamsi</a>, <a class="user-mention" href="https://hashnode.com/@learnwithmegha">Megha</a>, <a class="user-mention" href="https://hashnode.com/@evilhead">Girish</a>, <a class="user-mention" href="https://hashnode.com/@sandeep">Sandeep</a>, <a class="user-mention" href="https://hashnode.com/@fazlerocks">Fazle</a>, 
<a class="user-mention" href="https://hashnode.com/@SandroVolpicella">Sandro</a></p>
<p>Looking forward to adding more people to the list and collaborating 🤝 with them.</p>
<p>Also, we are constantly iterating on what Hashnode means to people and organizations, and if that excites you, apply on our <a target="_blank" href="https://careers.hashnode.com/jobs">careers</a> page. </p>
<p>See you soon. Ciao 👋</p>
]]></content:encoded></item><item><title><![CDATA[Migrating my side project from CRA to Next.js]]></title><description><![CDATA[It's a beautiful sunny day, birds are chirping, your side project is running seamlessly...

but then...... you get a $5.90 bill payment reminder for the nth time due to a s3 bucket you made on DigitalOcean Spaces that merely has 500MB data.

That's i...]]></description><link>https://blog.lakbychance.com/migrating-my-side-project-from-cra-to-nextjs</link><guid isPermaLink="true">https://blog.lakbychance.com/migrating-my-side-project-from-cra-to-nextjs</guid><category><![CDATA[Next.js]]></category><category><![CDATA[React]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[Vercel]]></category><category><![CDATA[Web Development]]></category><dc:creator><![CDATA[Lakshya Thakur]]></dc:creator><pubDate>Mon, 04 Apr 2022 05:41:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1649050074905/PCO1lnbh4.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It's a beautiful sunny day, birds are chirping, your side project is running seamlessly...</p>
<p><img src="https://media0.giphy.com/media/94OPiy03NXCiQ/200.gif" alt="Beautiful Day" class="image--center mx-auto" /></p>
<p>but then...... you get a <strong>$5.90</strong> bill payment reminder for the nth time due to an S3 bucket you made on DigitalOcean Spaces that merely holds 500MB of data.</p>
<p><img src="https://media3.giphy.com/media/3orifboWZvOhWnYRvq/giphy.gif" alt="Simposons Pay" class="image--center mx-auto" /></p>
<p>That's it. You have had enough. It doesn't make sense to pay at all for such little storage. You've got to put all your assets somewhere else. But before that, let's have a little flashback :-</p>
<p><img src="https://i.gifer.com/GBHT.gif" alt="Elsa flashback" class="image--center mx-auto" /></p>
<p>So more than a year ago, I participated in a hackathon organised by dev.to in collaboration with DigitalOcean. We had to make an app deployed on DigitalOcean's App Platform. So I made <a target="_blank" href="https://www.animeccha.com">animeccha.com</a>, which is documented as a series of blog posts starting <a target="_blank" href="https://blog.lakbychance.com/building-animeccha-prologue">here</a>. The images being served are powered by <a target="_blank" href="https://imagekit.io">imagekit.io</a>, the deployment by DO's App Platform, and the images are stored in an S3 bucket on DO's Spaces (this was free for the first 2-3 months due to the $100 credit you get as a free trial).</p>
<p>Now back to the present :-</p>
<p><img src="https://64.media.tumblr.com/8a0180ed6745d976fc2917a029bc0f5b/9f5099bd17bc9ed7-8e/s540x810/02288ff2efd1ab08c46fb3025f3bfa794d35fde3.gifv" alt="Adam Project" class="image--center mx-auto" /></p>
<p>My image transformation was already powered by ImageKit, but the storage was in an S3 bucket far far away (well, not really, they have edge CDNs). So I deleted the S3 bucket and just used the <a target="_blank" href="https://docs.imagekit.io/media-library/overview">media library</a> ImageKit gives as part of the free plan with 20GB of monthly BANDWIDTH!!! Lol, should have done that in the first place.</p>
<p><img src="https://media0.giphy.com/media/3o85xnoIXebk3xYx4Q/200.gif" alt="idiot Gordon" class="image--center mx-auto" /></p>
<p>So just by switching to it, I started saving $5.90 a month. Srsly, why was I even paying? Such laziness, much wow. But I did one more thing that was not required at all. I also mistakenly deleted my App Platform deployment, so <a target="_blank" href="https://www.animeccha.com">animeccha.com</a> was now a template site of L'ORÉAL products. No srsly, that's what my domain vendor thought of showing as a fallback.</p>
<p><img src="https://media0.giphy.com/media/OFN4Z5MWcU4E/giphy.gif" alt="Loreal paris" class="image--center mx-auto" /></p>
<p>Anyhow, I decided that my deployment platform should also change, and I chose Vercel because I have already used their services and find the setup relatively easy. As this project also has its custom domain, <strong>animeccha.com</strong>, I had to switch my name servers from DigitalOcean's to Vercel's in my domain's DNS records. Cool, the site was now up and running on Vercel; everything was back to normal.</p>
<p><img src="https://c.tenor.com/tlfog7uiZxcAAAAC/lifes-getting-back-to-normal-back-to-normal.gif" alt="everything is normal" class="image--center mx-auto" /></p>
<p>But but....the developer urge to go further.</p>
<p>My app deals with images a lot, and being a CSR (client-side rendering) web app, the image loading, even with placeholders, wasn't the best UX. This is something I had in mind for a long time, and now that I have gained some experience in Next.js from my job (btw, I work <a target="_blank" href="https://engineering.hashnode.com">@Hashnode</a>), I thought of migrating the whole thing to it and taking advantage of their sweet <a target="_blank" href="https://nextjs.org/docs/api-reference/next/image">Image component</a>.</p>
<p>Their official docs have a great <a target="_blank" href="https://nextjs.org/docs/migrating/from-create-react-app">guide</a> to do so, and I followed it step by step. TL;DR - getting rid of <code>react-scripts</code> and <code>react-router-dom</code>, installing <code>next</code>, and updating the <code>package.json</code> scripts. Also taking advantage of their <code>pages</code> route structure for flexible routing.</p>
<p>My old routes:-</p>
<ul>
<li><p><code>/home</code> - Landing page</p>
</li>
<li><p><code>/anime/:anime</code> - Specific Anime Page</p>
</li>
<li><p><code>/anime/:anime/:montage</code> - Specific Montage of that Anime.</p>
</li>
</ul>
<p>My new routes:-</p>
<ul>
<li><p><code>/</code> - Landing page</p>
</li>
<li><p><code>/:anime</code> - Specific Anime Page</p>
</li>
<li><p><code>/:anime/:montage</code> - Specific Montage Page of that Anime.</p>
</li>
</ul>
<p>The new Next.js powered routes made much more sense. <code>/</code> and <code>/:anime</code> got transformed into purely SSR pages powered by <code>getServerSideProps</code>, and <code>/:anime/:montage</code> loaded the whole <code>Montage</code> component dynamically on the client side since it required a lot of interaction with browser APIs (with more time there, something else could probably be worked out, but it's fine for now).</p>
<p>And while this migration was happening, I realised something. I found modifying my <strong>CSS Modules</strong> powered styling less flexible now that I have used <a target="_blank" href="https://tailwindcss.com/">Tailwind</a> at work. So yes, I installed Tailwind following this <a target="_blank" href="https://tailwindcss.com/docs/guides/nextjs">guide</a> and converted most of my CSS to it, except the <code>Montage</code> component, since it had more moving parts that I wanted to keep intact.</p>
<p>With all this, I deployed my changes in a new branch and raised a <a target="_blank" href="https://github.com/lakbychance/animeccha/pull/1">PR</a> to the main one.</p>
<p><img src="https://thumbs.gfycat.com/CompleteMeatyKittiwake-max-1mb.gif" alt="Merge Chandler" class="image--center mx-auto" /></p>
<p>After two or three failing deployments, due to TypeScript errors and the fact that I forgot to tell Vercel that what I was deploying was not a CRA project but a Next.js one, the project was finally live with a much better UX.</p>
<p>But but....there could be</p>
<p><img src="https://thumbs.gfycat.com/AnchoredScratchyDikkops-size_restricted.gif" alt="Agent Smith More" class="image--center mx-auto" /></p>
<p>I also realised that both my <code>/</code> and <code>/:anime</code> pages are static and don't rely on server data. It was time to treat them likewise as well. Enter <code>getStaticProps</code> and <code>getStaticPaths</code>, part of the SSG feature of Next.js. So I updated both pages accordingly, replacing <code>getServerSideProps</code> with the above functions, and though there wasn't much of a visual difference, I somehow felt at ease now.</p>
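<p>To make the SSG switch concrete, here is a rough sketch of what the <code>/:anime</code> page's data functions might look like. This is NOT the actual project code; the anime list and prop shape are made up for illustration, and in a real Next.js project these functions would be <code>export</code>-ed from the page file (e.g. <code>pages/[anime].js</code>):</p>

```javascript
// Hedged sketch of SSG data functions for a dynamic /:anime route.
// The list of animes here is illustrative, not the real data source.
const animes = ['naruto', 'one-piece'];

async function getStaticPaths() {
  // Pre-render a page for every known anime at build time.
  return {
    paths: animes.map((anime) => ({ params: { anime } })),
    fallback: false, // any other path becomes a 404
  };
}

async function getStaticProps({ params }) {
  // No server data needed; the route param alone drives the page.
  return { props: { anime: params.anime } };
}
```

<p>Next.js calls <code>getStaticPaths</code> once at build time to know which pages to generate, then <code>getStaticProps</code> once per path, so no server work happens per request.</p>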
<p><img src="https://c.tenor.com/nIlCrtCOSi4AAAAC/master-oogway-kung-fu-panda.gif" alt="inner peace" class="image--center mx-auto" /></p>
<p>This was also my first time making an effort to migrate an existing <strong>side project</strong>. Though it's a very simple one and really doesn't have any customers and what not, refactoring it to give a superior UX (even just to me) was a nice experience.</p>
<h3 id="heading-thank-you-for-your-time-d">Thank you for your time :D</h3>
]]></content:encoded></item><item><title><![CDATA[A structured approach to Hacktoberfest 2021]]></title><description><![CDATA[Opensource X October - an ongoing theme to make people familiar with OSS across the world. I had my first experience last year. It was very exciting and explorative in nature.
This year 2021, I wasn't sure I would be doing it again. But then I came a...]]></description><link>https://blog.lakbychance.com/a-structured-approach-to-hacktoberfest-2021</link><guid isPermaLink="true">https://blog.lakbychance.com/a-structured-approach-to-hacktoberfest-2021</guid><category><![CDATA[Open Source]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[Web Development]]></category><category><![CDATA[Hashnode]]></category><dc:creator><![CDATA[Lakshya Thakur]]></dc:creator><pubDate>Sun, 24 Oct 2021 10:35:58 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1635071237791/UlERTUvHI.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Opensource X October</strong> - an ongoing theme to make people familiar with OSS across the world. I had my first <a target="_blank" href="https://blog.lakbychance.com/tee-or-tree-fest-hacktober-dilemma-1">experience</a> last year. It was very exciting and explorative in nature.</p>
<p>This year 2021, I wasn't sure I would be doing it again. But then I came across</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/Progyan1997/Operational-Transformation">https://github.com/Progyan1997/Operational-Transformation</a></div>
<p> </p>
<p>and started digging into what all it's about.</p>
<p>Amazingly, I approached things in a more structured fashion than last year, and I would like to take you through the first 4 contributions that qualified me for <a target="_blank" href="https://hacktoberfest.digitalocean.com/profile">Hacktoberfest 2021</a>.</p>
<hr />
<h3 id="heading-1-numero-uno">1. Numero Uno</h3>
<p><strong>Operational Transformation ?</strong>....sounds like a very sci-fi concept...doesn't it ?</p>
<p><img src="https://i.pinimg.com/originals/72/ee/9a/72ee9aa8b20fca1901597e6eb812761c.gif" alt="Autobots" class="image--center mx-auto" /></p>
<p>A quick google search and I landed on the classic <a target="_blank" href="https://en.wikipedia.org/wiki/Operational_transformation">Wikipedia</a> page describing what it was.</p>
<blockquote>
<p>Operational transformation (OT) is a technology for supporting a range of collaboration functionalities in advanced collaborative software systems. OT was originally invented for consistency maintenance and concurrency control in collaborative editing of plain text documents. Its capabilities have been extended and its applications expanded to include group undo, locking, conflict resolution, operation notification and compression, group-awareness, HTML/XML and tree-structured document editing, collaborative office productivity tools, application-sharing, and collaborative computer-aided media design tools.[1] In 2009 OT was adopted as a core technique behind the collaboration features in Apache Wave and Google Docs.</p>
</blockquote>
<p>Well, the above paragraph and a bit more in the same article gave me a high-level picture of the scope of this repository. It's about <strong>collaborative editing</strong> and is not limited to one <strong>Database</strong> or <strong>Editor</strong> implementation. Rather, it aims to allow multiple combinations of <strong>Database</strong> and <strong>Editor</strong> as part of the <strong>otjs</strong> package. So you can have a <strong>firebase</strong> database with a <strong>monaco</strong> editor or a <strong>firebase</strong> database with an <strong>ace</strong> editor, and this is just the beginning of the possibilities.</p>
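<p>The mix-and-match idea can be sketched like this. To be clear, none of these class or function names are the real otjs API; this is purely an illustration of why any database adapter can pair with any editor adapter:</p>

```javascript
// Illustrative only: adapters that speak a shared operation format.
// These names are made up and are NOT the otjs package's API.
class FirebaseAdapter {
  applyRemote(op) { /* sync the operation via the database */ }
}
class MonacoAdapter {
  applyLocal(op) { /* apply the operation to the Monaco editor */ }
}
class AceAdapter {
  applyLocal(op) { /* same contract, different editor */ }
}

// The OT core only cares about the adapter contract, so the
// concrete database/editor backends are interchangeable.
function createCollaboration(db, editor) {
  return { db, editor };
}

const monacoOverFirebase = createCollaboration(new FirebaseAdapter(), new MonacoAdapter());
const aceOverFirebase = createCollaboration(new FirebaseAdapter(), new AceAdapter());
```

<p>Swapping the editor (or the database) means swapping one adapter, which is exactly the combinatorial freedom described above.</p>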
<p>As someone who hadn't really tinkered with any of the <strong>Database</strong>, <strong>Editor</strong> or <strong>Operational Transformation</strong> tech mentioned here, this was very new and kind of exciting.</p>
<p>I randomly opened a couple of code files to see what's happening, only to be clueless.</p>
<p><img src="https://media1.giphy.com/media/1X7lCRp8iE0yrdZvwd/giphy.gif" alt="clueless" class="image--center mx-auto" /></p>
<p>So I took another approach here. This approach has been talked about a lot by folks all over dev communities when looking to contribute to a new OSS project: reading the <strong>docs</strong> and figuring things out. So naturally the <strong>README</strong> was the entry point, which got me familiar with opening issues here (via discussions and not directly for this repo) and led me to the <strong>CONTRIBUTING</strong> markdown file with the guidelines for making a contribution.</p>
<p>This is where I found my first way to contribute: improving the existing <strong>docs</strong>, the <strong>CONTRIBUTING</strong> markdown to be specific, with some linguistic nuances and corrected links. Does this make me understand the code better in any way?</p>
<p><img src="https://media2.giphy.com/media/2V5CGJWZRKtVcVBL72/giphy.gif?cid=6c09b952wuh6une6rw5gdtzplsjhcfv3ygnxooz90df54amb&amp;ep=v1_gifs_search&amp;rid=giphy.gif&amp;ct=g" alt class="image--center mx-auto" /></p>
<p>So why bother?</p>
<p>It's the ice-breaker to start working on this repository. I ended up opening the following for this :-</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/Progyan1997/Operational-Transformation/discussions/63">https://github.com/Progyan1997/Operational-Transformation/discussions/63</a></div>
<p> </p>
<p>The maintainer opened up a new issue</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/Progyan1997/Operational-Transformation/issues/64">https://github.com/Progyan1997/Operational-Transformation/issues/64</a></div>
<p> </p>
<p>for me to fix, and as per the <strong>CONTRIBUTING</strong> markdown guidelines, I made my first PR to this repository, which got merged by the maintainer.</p>
<p><img src="https://img.wattpad.com/f317dfdc1573c0536f71c9308e00c64f1bedf2f9/68747470733a2f2f696d672e776174747061642e636f6d2f73746f72795f70617274732f313034373238323332372f696d616765732f313637306362363333386633386531623636333636393039303636342e676966?s=fit&amp;h=360&amp;w=360&amp;q=80" alt="first time" class="image--center mx-auto" /></p>
<p>The PR 👇</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/Progyan1997/Operational-Transformation/pull/65">https://github.com/Progyan1997/Operational-Transformation/pull/65</a></div>
<p> </p>
<hr />
<h3 id="heading-2-double-trouble">2. Double Trouble</h3>
<p>Well, we have officially entered <strong>Hacktoberfest 2021</strong>, even though I didn't register until I had my 3 PRs merged, I believe.</p>
<p><img src="https://media3.giphy.com/media/e1s8C0YnnfjlRf7mEr/200.gif" alt="shrug" class="image--center mx-auto" /></p>
<p>So I made a very small contribution to start with, and the next one should at least give me more insight into running some examples using the <a target="_blank" href="https://github.com/Progyan1997/Operational-Transformation/tree/main/packages">packages</a> in this repository.</p>
<p>And there was already a provision for examples here</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/Progyan1997/Operational-Transformation/tree/main/examples/collaborative-editors#readme">https://github.com/Progyan1997/Operational-Transformation/tree/main/examples/collaborative-editors#readme</a></div>
<p> </p>
<p>But there was no <strong>README</strong> telling the user how to run this example. And this ultimately turned into another opportunity for a contribution.</p>
<p><img src="https://c.tenor.com/q4vEXq6P6zoAAAAd/there-is-an-opportunity-here-opportunity.gif" alt="opportunity" class="image--center mx-auto" /></p>
<p>This is the second discussion which took place</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/Progyan1997/Operational-Transformation/discussions/66">https://github.com/Progyan1997/Operational-Transformation/discussions/66</a></div>
<p> </p>
<p>The maintainer opened up a new issue</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/Progyan1997/Operational-Transformation/issues/67">https://github.com/Progyan1997/Operational-Transformation/issues/67</a></div>
<p> </p>
<p>and told me that I could pick it up if I wanted to. I gave it a try in my local setup and got the example running. After this, I created a <strong>README</strong> with instructions and a few images that give a new user enough information to run these examples and validate that they work. There goes my second PR to this repository.</p>
<p><img src="https://media0.giphy.com/media/UibuCmhMJwCdY1CfWA/giphy.gif" alt="second" class="image--center mx-auto" /></p>
<p>The PR 👇</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/Progyan1997/Operational-Transformation/pull/71">https://github.com/Progyan1997/Operational-Transformation/pull/71</a></div>
<p> </p>
<hr />
<h3 id="heading-3-third-times-the-charm">3. Third time's the charm</h3>
<p>Till now, I actually hadn't picked up any of the existing issues that were already up for grabs, since most of them required technical know-how. So it was time to try a <strong>small</strong> one to get my hands dirty with code.</p>
<p>I found the following issue fulfilling the criteria</p>
<p>what is happening</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/Progyan1997/Operational-Transformation/issues/60">https://github.com/Progyan1997/Operational-Transformation/issues/60</a></div>
<p> </p>
<p>I basically had to add support for making remote selection of text visible in the Ace editor. Let's see what this means by example. Suppose two users are contributing to a doc powered by the Ace editor. Now, when one of them highlights some text or places a cursor somewhere in the same doc, the other should know where that cursor is and what the highlighted/selected text is.</p>
<p>At first I thought to myself: how is this task labelled <strong>small</strong>? Wouldn't it need more lines of code?</p>
<p><img src="https://giffiles.alphacoders.com/199/199733.gif" alt="curious" class="image--center mx-auto" /></p>
<p>My question got answered when I compared the existing code for <strong>Monaco</strong> with <strong>Ace</strong> and saw that both have almost the same implementation, with the code for remote cursor/selection present as well. The only thing missing from <strong>Ace</strong> was a bit of <strong>CSS</strong>, and that was all that needed to be figured out. The good thing is that an existing implementation helped me <strong>compare</strong> and arrive at the solution quickly.</p>
<p>I submitted my PR and the maintainer appreciated the catch and fix.</p>
<p><img src="https://y.yarn.co/5093488d-dbdd-47ab-9ab7-8b4e87599df4_text.gif" alt="3rd contribution" class="image--center mx-auto" /></p>
<p>The PR 👇</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/Progyan1997/Operational-Transformation/pull/75">https://github.com/Progyan1997/Operational-Transformation/pull/75</a></div>
<p> </p>
<p>He also asked me if I could try adding support for a remote tooltip in the <strong>Ace</strong> editor. I did a quick comparison between the <strong>Monaco</strong> and <strong>Ace</strong> implementations again and found that this would require implementing a new class itself and should be addressed in a separate issue. The maintainer agreed and opened a new issue for the same.</p>
<hr />
<h3 id="heading-4-may-the-fourth-be-with-you">4. May the Fourth be with you</h3>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/Progyan1997/Operational-Transformation/issues/76">https://github.com/Progyan1997/Operational-Transformation/issues/76</a></div>
<p> </p>
<p>The last issue helped me get familiar with a bit of the code and resulted in the creation of the above issue (labelled <strong>medium</strong>), which needed me to go deeper into the <strong>Ace</strong> editor APIs. But I will be honest here: the editor doesn't have good docs (as far as I explored).</p>
<p><img src="https://i.pinimg.com/originals/60/dc/77/60dc775fa06ab876ce2d45e62766b8d3.gif" alt="not good" class="image--center mx-auto" /></p>
<p>It was now required to implement a tooltip that lets the current user know the username of the remote user who is typing. On paper, this doesn't sound too different from what might have gone into implementing the remote cursor/selection.</p>
<p>But when I looked at the existing tooltip implementation in the <strong>Monaco</strong> editor, I saw that it has a concept of a <strong>ContentWidget</strong>, which allows such UI widgets to be built and maintained easily, and the developer used it to create a new <strong>CursorWidget</strong> class that takes care of adding, updating and removing the tooltip. Here comes the interesting part: <strong>Ace</strong> itself doesn't have the concept of a <strong>widget</strong> in the same sense.</p>
<p>There are two types of <strong>markers</strong> though, which can be added using the <code>addMarker</code> or <code>addDynamicMarker</code> API. Initially I thought maybe the first one would be enough to achieve the tooltip functionality, but since it's meant for static markers, which are added and removed each time using the <code>addMarker</code>/<code>removeMarker</code> combo, it didn't fit the tooltip use-case. The second one, i.e. <code>addDynamicMarker</code>, didn't have good docs, and I felt less confident using it until I found the following tooltip implementation in another OSS project</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/convergencelabs/ace-collab-ext/blob/master/src/ts/AceCursorMarker.ts">https://github.com/convergencelabs/ace-collab-ext/blob/master/src/ts/AceCursorMarker.ts</a></div>
<p> </p>
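<p>For a feel of how a dynamic marker can work, here is a heavily hedged sketch. This is not the actual PR code, and the <code>update(html, markerLayer, session, config)</code> shape is my assumption about Ace's dynamic-marker contract (Ace calls <code>update</code> on each render so the marker can redraw itself instead of being removed and re-added); verify the exact signatures against Ace's source:</p>

```javascript
// Illustrative dynamic marker for a remote-user tooltip in Ace.
// Assumption: addDynamicMarker accepts any object exposing an
// update(html, markerLayer, session, config) method. Not real PR code.
class RemoteTooltipMarker {
  constructor(username) {
    this.username = username;
    this.position = { row: 0, column: 0 };
  }

  setPosition(row, column) {
    this.position = { row, column };
    // In real code you would also signal Ace to re-render the
    // marker layer so the tooltip moves immediately.
  }

  // Ace passes an html array that the marker appends raw markup to;
  // config carries rendering metrics like lineHeight/characterWidth.
  update(html, markerLayer, session, config) {
    const top = (this.position.row - config.firstRow) * config.lineHeight;
    const left = this.position.column * config.characterWidth;
    html.push(
      `<div class="remote-tooltip" style="top:${top}px;left:${left}px">` +
        `${this.username}</div>`
    );
  }
}
```

<p>A session would register it with something like <code>session.addDynamicMarker(new RemoteTooltipMarker('someuser'), true)</code>; again, treat the exact call shapes here as assumptions rather than documented API.</p>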
<p>After reading the code and trying to implement the same kind of functionality a couple of times, while ensuring I followed the <strong>CursorWidget</strong> template, I finally got the tooltip implementation to work with Ace, and I also proposed debouncing the tooltip announcements for a better UX. Figuring all of this out took at least two full days, and I was happy with the end result.</p>
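<p>The debouncing idea is the classic one: delay the tooltip update until the remote user pauses typing, so the label isn't redrawn on every single keystroke. A minimal sketch (not the PR's code):</p>

```javascript
// Minimal trailing-edge debounce: a burst of calls collapses into
// one call, fired delayMs after the last invocation.
function debounce(fn, delayMs) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), delayMs);
  };
}

// e.g. const announceTooltip = debounce(renderTooltip, 200);
// announceTooltip fires only once the remote user stops typing for 200ms.
```
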
<p><img src="https://64.media.tumblr.com/d028bf7ab190f50806853f6a08162840/7090a65e5cbe52fd-1f/s500x750/f5d0a0b11cf3897b1ec10904b119ea349d33e05c.gifv" alt="Number 4" class="image--center mx-auto" /></p>
<p>The PR 👇</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/Progyan1997/Operational-Transformation/pull/81">https://github.com/Progyan1997/Operational-Transformation/pull/81</a></div>
<p> </p>
<hr />
<p>If you notice the order of the issues, you would see that this approach was progressive, going from no-code to code contributions within 4 PRs. I know the maintainer from LinkedIn and have interacted with him in DMs and comments a couple of times. So when I came across his post that he was looking for contributors to his new repo, I thought, let's give it a shot!</p>
<p>While contributing this time, besides the problem solving and technical learning, I got to know about <a target="_blank" href="https://gist.github.com/joshbuchea/6f47e86d2510bce28f8e7f42ae84c716">Semantic Commit Messages</a>, <a target="_blank" href="https://fle.github.io/git-tip-keep-your-branch-clean-with-fixup-and-autosquash.html">--fixup</a> flag, a gist of Github Discussions and creating issues from the same.</p>
<p>Besides the above 4 PRs, I contributed 3 more to the same repo with minor test updates, code refactoring, etc. I also came across the following repo for contributing JS interview questions and submitted 3 PRs there.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/devkodeio/javascript-interview-questions">https://github.com/devkodeio/javascript-interview-questions</a></div>
<p> </p>
<p>Overall I enjoyed the structured approach this time.</p>
<p><img src="https://i.gifer.com/74Uh.gif" alt="smirk" class="image--center mx-auto" /></p>
<h2 id="heading-thank-you-for-your-time-d">Thank you for your time :D</h2>
]]></content:encoded></item><item><title><![CDATA[ZigZag ⚡ traverse that binary tree 🌲]]></title><description><![CDATA[Today we are going to solve two DS problems that are actually very similar. 
They are respectively :-

level order traversal in binary tree
zigzag order traversal in binary tree

And I like to solve them in a specific way which comes natural to me. 
...]]></description><link>https://blog.lakbychance.com/zigzag-traverse-that-binary-tree</link><guid isPermaLink="true">https://blog.lakbychance.com/zigzag-traverse-that-binary-tree</guid><dc:creator><![CDATA[Lakshya Thakur]]></dc:creator><pubDate>Sun, 26 Sep 2021 14:26:21 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1632666009015/SFojS2_xMg.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Today we are going to solve two DS problems that are actually very similar. 
They are respectively :-</p>
<ul>
<li><a target="_blank" href="https://leetcode.com/problems/binary-tree-level-order-traversal/">level order traversal in binary tree</a></li>
<li><a target="_blank" href="https://leetcode.com/problems/binary-tree-zigzag-level-order-traversal/">zigzag order traversal in binary tree</a></li>
</ul>
<p>And I like to solve them in a specific way that comes naturally to me. </p>
<hr />
<h3 id="heading-binary-tree-level-order-traversal">Binary Tree Level Order Traversal</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1632665250474/Y3ysQegoi.gif" alt="Hashnode_level_order_traversal.gif" /></p>
<p>Here is the code which works :-</p>
<pre><code class="lang-js"><span class="hljs-keyword">var</span> levelOrder = <span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params">root</span>) </span>{
    <span class="hljs-keyword">if</span>(!root)
        <span class="hljs-keyword">return</span> [];
    <span class="hljs-keyword">let</span> currentQueue = [];
    <span class="hljs-keyword">let</span> nextNodes = [];
    <span class="hljs-keyword">const</span> output = [[root.val]];
    currentQueue.push(root);
    <span class="hljs-keyword">while</span>(currentQueue.length){
        <span class="hljs-keyword">const</span> node = currentQueue.shift();
        nextNodes = nextNodes.concat(findNextNodes(node));
        <span class="hljs-keyword">if</span>(!currentQueue.length &amp;&amp; nextNodes.length){
            currentQueue = nextNodes;
            nextNodes = [];
            output.push(currentQueue.map(<span class="hljs-function"><span class="hljs-params">node</span>=&gt;</span>node.val));
        }
    }
    <span class="hljs-keyword">return</span> output;
};

<span class="hljs-keyword">const</span> findNextNodes=<span class="hljs-function">(<span class="hljs-params">node</span>)=&gt;</span>{
    <span class="hljs-keyword">const</span> nextNodes = [];
    <span class="hljs-keyword">if</span>(node.left){
        nextNodes.push(node.left);
    }
    <span class="hljs-keyword">if</span>(node.right){
        nextNodes.push(node.right);
    }
    <span class="hljs-keyword">return</span> nextNodes;
}
</code></pre>
<p><strong>Explanation :-</strong></p>
<ul>
<li>First we check whether <code>root</code> of binary tree exists or not. If not, simply return empty array <code>[]</code>.</li>
<li>Initialize a <code>currentQueue</code> and <code>nextNodes</code> array. </li>
<li>Initialize a <code>output</code> 2D array where for each level we maintain an array consisting of the level nodes. Initially, the <code>[root.val]</code> will be the first valid entry. So <code>output</code> will be <code>[[3]]</code>.</li>
<li>And then push the current <code>root</code> node inside <code>currentQueue</code>. So <code>currentQueue</code> will be <code>[Node(3)]</code>.</li>
<li>Now, we will loop over the <code>currentQueue</code> until it is empty. </li>
<li>Being a queue, we <strong>dequeue</strong> the first entry in it using the <code>shift</code> method on <code>currentQueue</code>.</li>
<li>Now, we pass this entry or <code>node</code> (<code>Node(3)</code>) into the <code>findNextNodes</code> function, whose purpose is to return a list of the children of the passed <code>node</code>. </li>
<li><code>findNextNodes</code> initializes an empty <code>nextNodes</code> array, pushes the <code>left</code> and <code>right</code> children of <code>node</code> if they exist, and returns the <code>nextNodes</code> array, which will be <code>[Node(9),Node(20)]</code>.</li>
<li>Then the returned array is concatenated with the outer <code>nextNodes</code>, so <code>nextNodes</code> will be <code>[Node(9),Node(20)]</code>.</li>
<li>Now, before the end of the loop, if <code>currentQueue</code> is empty and <code>nextNodes</code> is not, <code>currentQueue</code> starts referring to the <code>nextNodes</code> array and <code>nextNodes</code> is reinitialized to <code>[]</code>. Also, all the <code>node.val</code> values in <code>currentQueue</code> are pushed in the form of an array to <code>output</code>. By this step, <code>output</code> will be <code>[[3],[9,20]]</code>. </li>
<li>This way, for each level, we successfully obtain the <strong>left to right</strong> level order traversal in the <code>output</code> array. </li>
</ul>
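<p>To sanity-check the walkthrough above, here is a standalone, Node-runnable sketch of the same code. The exact tree shape (3 → 9, 20; 9 → 23, 43; 20 → 15, 7) is assumed from the traces in this article, so treat it as an illustrative example:</p>

```javascript
// Minimal LeetCode-style TreeNode constructor for the example.
function TreeNode(val, left = null, right = null) {
  this.val = val;
  this.left = left;
  this.right = right;
}

// Same logic as the levelOrder solution above.
var levelOrder = function (root) {
  if (!root) return [];
  let currentQueue = [];
  let nextNodes = [];
  const output = [[root.val]];
  currentQueue.push(root);
  while (currentQueue.length) {
    const node = currentQueue.shift();
    nextNodes = nextNodes.concat(findNextNodes(node));
    if (!currentQueue.length && nextNodes.length) {
      currentQueue = nextNodes;
      nextNodes = [];
      output.push(currentQueue.map((node) => node.val));
    }
  }
  return output;
};

const findNextNodes = (node) => {
  const nextNodes = [];
  if (node.left) nextNodes.push(node.left);
  if (node.right) nextNodes.push(node.right);
  return nextNodes;
};

// Assumed example tree: 3 → (9, 20); 9 → (23, 43); 20 → (15, 7).
const root = new TreeNode(
  3,
  new TreeNode(9, new TreeNode(23), new TreeNode(43)),
  new TreeNode(20, new TreeNode(15), new TreeNode(7))
);

console.log(JSON.stringify(levelOrder(root)));
// → [[3],[9,20],[23,43,15,7]]
```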
<hr />
<h3 id="heading-binary-tree-zigzag-level-order-traversal">Binary Tree Zigzag Level Order Traversal</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1632665262182/7MFrPhUfe.gif" alt="Hashnode_zigzag_level_order_traversal.gif" /></p>
<p>Before we see the code and explanation, it's important to note that we will be converting the above solution to obtain the current one. I would like to stress this derivation because this is how I particularly enjoyed the process of arriving at the solution. We will do it in two iterations so that the process seems more intuitive. </p>
<p>Alright, here is the first iteration :-</p>
<pre><code class="lang-js"><span class="hljs-keyword">var</span> zigzagLevelOrder = <span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params">root</span>) </span>{
    <span class="hljs-keyword">if</span>(!root)
        <span class="hljs-keyword">return</span> [];
    <span class="hljs-keyword">let</span> currentQueue = [];
    <span class="hljs-keyword">let</span> nextNodes = [];
    <span class="hljs-keyword">let</span> isNextLevelEven = <span class="hljs-literal">true</span>;
    <span class="hljs-keyword">const</span> output = [[root.val]];
    currentQueue.push(root);
    <span class="hljs-keyword">while</span>(currentQueue.length){
        <span class="hljs-keyword">const</span> node = currentQueue.shift();
        nextNodes = nextNodes.concat(findNextNodes(node,isNextLevelEven));
        <span class="hljs-keyword">if</span>(!currentQueue.length &amp;&amp; nextNodes.length){
            currentQueue = nextNodes;
            nextNodes = [];
            isNextLevelEven = !isNextLevelEven;
            output.push(currentQueue.map(<span class="hljs-function"><span class="hljs-params">node</span>=&gt;</span>node.val));
        }
    }
    <span class="hljs-keyword">return</span> output;
};

<span class="hljs-keyword">const</span> findNextNodes=<span class="hljs-function">(<span class="hljs-params">node,isNextLevelEven</span>)=&gt;</span>{
    <span class="hljs-keyword">const</span> nextNodes = [];
    <span class="hljs-keyword">if</span>(node.left){
        nextNodes.push(node.left);
    }
    <span class="hljs-keyword">if</span>(node.right){
        nextNodes.push(node.right);
    }
    <span class="hljs-keyword">return</span> isNextLevelEven?nextNodes.reverse():nextNodes;
}
</code></pre>
<p><strong>Explanation :-</strong></p>
<ul>
<li>As soon as you see the above code, you will realize that all it does is add subtle nuances to <strong>level order traversal's</strong> code. </li>
<li>The first nuance is the <code>isNextLevelEven</code> variable. So let's discuss it. When we think about a <strong>zigzag</strong> traversal, it starts on <strong>level 1</strong> i.e. the <code>root</code> node from <strong>left to right</strong>, then goes <strong>right to left</strong> on <strong>level 2</strong>, and then <strong>left to right</strong> again on <strong>level 3</strong>. So an <strong>alternating</strong> state is being introduced. An alternating state can be depicted using a <strong>boolean</strong>, and that's what we are doing here. If <code>root</code> is <strong>level 1</strong> i.e. an <strong>odd</strong> level, then the <strong>next level</strong> will be <strong>even</strong>. This is why I have initialized <code>isNextLevelEven</code> to <code>true</code> at the start of the code. </li>
<li>Before the start of the loop, <code>output</code> will be <code>[[3]]</code> and <code>currentQueue</code> will be <code>[Node(3)]</code>.</li>
<li>Now, we will loop over the <code>currentQueue</code> until it is empty. </li>
<li>Being a queue, we <strong>dequeue</strong> the first entry in it using the <code>shift</code> method on <code>currentQueue</code>.</li>
<li>Now, we pass <code>node</code> (<code>Node(3)</code>) and <code>isNextLevelEven</code> into the <code>findNextNodes</code> function, whose purpose is to return a list of the children of the passed <code>node</code> according to the <code>isNextLevelEven</code> boolean variable. </li>
<li><code>findNextNodes</code>, like before, initializes an empty <code>nextNodes</code> array and pushes the <code>left</code> and <code>right</code> children of <code>node</code> if they exist. The change comes in the <strong>return</strong> step, which now has a <strong>ternary</strong> condition: if <code>isNextLevelEven</code> is true, it returns the <strong>reversed</strong> <code>nextNodes</code> (for right to left); else it returns <code>nextNodes</code> (for left to right) as it is. In this step, since <code>isNextLevelEven</code> is <code>true</code>, the returned value will be <code>[Node(20),Node(9)]</code>.</li>
<li>Then the returned array is concatenated with the outer <code>nextNodes</code>, so <code>nextNodes</code> will be <code>[Node(20),Node(9)]</code>.</li>
<li>Now, before the end of the loop, if <code>currentQueue</code> is empty and <code>nextNodes</code> is not, <code>currentQueue</code> starts referring to the <code>nextNodes</code> array and <code>nextNodes</code> is reinitialized to <code>[]</code>. We also flip the <code>isNextLevelEven</code> boolean. Then all the <code>node.val</code> values in <code>currentQueue</code> are pushed in the form of an array to <code>output</code>. By this step, <code>output</code> will be <code>[[3],[20,9]]</code>. </li>
<li>The algorithm seems reasonable till here. But let's see what happens in the next iteration of the while loop.</li>
<li>Now, <code>currentQueue</code> being <code>[Node(20),Node(9)]</code>, <code>node</code> will be <code>Node(20)</code> due to the <code>shift</code> operation.</li>
<li>Since the next level is 3, which is <strong>odd</strong>, <code>[Node(15),Node(7)]</code> will be concatenated with <code>nextNodes</code>. </li>
<li>And in the next iteration of the while loop, <code>node</code> will be <code>Node(9)</code> due to the <code>shift</code> operation, resulting in <code>nextNodes</code> being <code>[Node(15),Node(7),Node(23),Node(43)]</code>. </li>
<li>Now, when the <code>if</code> condition inside the <code>while</code> loop holds true, <code>output</code> will become <code>[[3],[20,9],[15,7,23,43]]</code>, which is clearly wrong because the last level is not traversed from left to right as we wanted. </li>
</ul>
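<p>If you want to see this failure concretely, here is a standalone, Node-runnable run of the first-iteration code against the same assumed example tree (3 → 9, 20; 9 → 23, 43; 20 → 15, 7):</p>

```javascript
// Minimal LeetCode-style TreeNode constructor for the example.
function TreeNode(val, left = null, right = null) {
  this.val = val;
  this.left = left;
  this.right = right;
}

// First-iteration (queue-based) attempt from above.
var zigzagLevelOrder = function (root) {
  if (!root) return [];
  let currentQueue = [];
  let nextNodes = [];
  let isNextLevelEven = true;
  const output = [[root.val]];
  currentQueue.push(root);
  while (currentQueue.length) {
    const node = currentQueue.shift();
    nextNodes = nextNodes.concat(findNextNodes(node, isNextLevelEven));
    if (!currentQueue.length && nextNodes.length) {
      currentQueue = nextNodes;
      nextNodes = [];
      isNextLevelEven = !isNextLevelEven;
      output.push(currentQueue.map((node) => node.val));
    }
  }
  return output;
};

const findNextNodes = (node, isNextLevelEven) => {
  const nextNodes = [];
  if (node.left) nextNodes.push(node.left);
  if (node.right) nextNodes.push(node.right);
  return isNextLevelEven ? nextNodes.reverse() : nextNodes;
};

// Assumed example tree: 3 → (9, 20); 9 → (23, 43); 20 → (15, 7).
const root = new TreeNode(
  3,
  new TreeNode(9, new TreeNode(23), new TreeNode(43)),
  new TreeNode(20, new TreeNode(15), new TreeNode(7))
);

console.log(JSON.stringify(zigzagLevelOrder(root)));
// → [[3],[20,9],[15,7,23,43]] — level 3 is NOT left to right
```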
<p>The step where we went wrong was assuming that <code>currentQueue</code> has to be a <strong>queue</strong>. Being a <strong>queue</strong>, we will never be able to start a traversal from the <strong>last</strong> element. For instance, after we got the correct <code>output</code> till <strong>level 2</strong>, we would have wanted that, for <strong>level 3</strong>, the <strong>children</strong> of <code>Node(9)</code> were pushed first from <strong>left to right</strong> and then those of <code>Node(20)</code>, but instead the reverse happened. This is literally how I formed the second intuition: it's the <strong>last</strong> element from which the <strong>next level</strong> of nodes should be traversed. And the data structure which operates primarily on the <strong>last</strong> element is none other than a <strong>stack</strong>. </p>
<p>So all we need is to replace the <code>shift</code> operation with <code>pop</code> and things will work as we intend. Obviously, it's better to rename <code>currentQueue</code> to <code>currentStack</code> for clarity. </p>
<p>So here is the working code :-</p>
<pre><code class="lang-js"><span class="hljs-keyword">var</span> zigzagLevelOrder = <span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params">root</span>) </span>{
    <span class="hljs-keyword">if</span>(!root)
        <span class="hljs-keyword">return</span> [];
    <span class="hljs-keyword">let</span> currentStack = [];
    <span class="hljs-keyword">let</span> nextNodes = [];
    <span class="hljs-keyword">let</span> isNextLevelEven = <span class="hljs-literal">true</span>;
    <span class="hljs-keyword">const</span> output = [[root.val]];
    currentStack.push(root);
    <span class="hljs-keyword">while</span>(currentStack.length){
        <span class="hljs-keyword">const</span> node = currentStack.pop();
        nextNodes = nextNodes.concat(findNextNodes(node,isNextLevelEven));
        <span class="hljs-keyword">if</span>(!currentStack.length &amp;&amp; nextNodes.length){
            currentStack = nextNodes;
            nextNodes = [];
            isNextLevelEven = !isNextLevelEven;
            output.push(currentStack.map(<span class="hljs-function"><span class="hljs-params">node</span>=&gt;</span>node.val));
        }
    }
    <span class="hljs-keyword">return</span> output;
};

<span class="hljs-keyword">const</span> findNextNodes=<span class="hljs-function">(<span class="hljs-params">node,isNextLevelEven</span>)=&gt;</span>{
    <span class="hljs-keyword">const</span> nextNodes = [];
    <span class="hljs-keyword">if</span>(node.left){
        nextNodes.push(node.left);
    }
    <span class="hljs-keyword">if</span>(node.right){
        nextNodes.push(node.right);
    }
    <span class="hljs-keyword">return</span> isNextLevelEven?nextNodes.reverse():nextNodes;
}
</code></pre>
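<p>For completeness, here is a standalone, Node-runnable check of the working code against the same assumed example tree (3 → 9, 20; 9 → 23, 43; 20 → 15, 7):</p>

```javascript
// Minimal LeetCode-style TreeNode constructor for the example.
function TreeNode(val, left = null, right = null) {
  this.val = val;
  this.left = left;
  this.right = right;
}

// Working stack-based zigzag traversal from above.
var zigzagLevelOrder = function (root) {
  if (!root) return [];
  let currentStack = [];
  let nextNodes = [];
  let isNextLevelEven = true;
  const output = [[root.val]];
  currentStack.push(root);
  while (currentStack.length) {
    const node = currentStack.pop();
    nextNodes = nextNodes.concat(findNextNodes(node, isNextLevelEven));
    if (!currentStack.length && nextNodes.length) {
      currentStack = nextNodes;
      nextNodes = [];
      isNextLevelEven = !isNextLevelEven;
      output.push(currentStack.map((node) => node.val));
    }
  }
  return output;
};

const findNextNodes = (node, isNextLevelEven) => {
  const nextNodes = [];
  if (node.left) nextNodes.push(node.left);
  if (node.right) nextNodes.push(node.right);
  return isNextLevelEven ? nextNodes.reverse() : nextNodes;
};

// Assumed example tree: 3 → (9, 20); 9 → (23, 43); 20 → (15, 7).
const root = new TreeNode(
  3,
  new TreeNode(9, new TreeNode(23), new TreeNode(43)),
  new TreeNode(20, new TreeNode(15), new TreeNode(7))
);

console.log(JSON.stringify(zigzagLevelOrder(root)));
// → [[3],[20,9],[23,43,15,7]]
```

<p>The last level now reads left to right, as intended.</p>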
<hr />
<p>The second question was asked to me in a startup's interview in the early stage of my career. At that time I didn't know JS and was interviewing for a Java Dev position. I didn't do well in that interview and kind of dreaded this problem. Later, one day while solving the <strong>level order traversal</strong> on LeetCode, I realised that the <strong>zigzag</strong> problem is an extension of it and implemented it in the same manner as documented in this article. That <strong>connection</strong> between the two problems was cool to arrive at. </p>
<h3 id="heading-thank-you-for-your-time-d">Thank you for your time :D</h3>
]]></content:encoded></item><item><title><![CDATA[You can log function names this way]]></title><description><![CDATA[Recently while debugging the codebase at my job, I had a use-case where I needed to know which method call happened where each such method call results in some SQL statement. The main focus was to know the order in which these functions were called s...]]></description><link>https://blog.lakbychance.com/you-can-log-function-names-this-way</link><guid isPermaLink="true">https://blog.lakbychance.com/you-can-log-function-names-this-way</guid><category><![CDATA[JavaScript]]></category><category><![CDATA[Web Development]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[hashnodebootcamp]]></category><category><![CDATA[coding]]></category><dc:creator><![CDATA[Lakshya Thakur]]></dc:creator><pubDate>Tue, 21 Sep 2021 12:05:07 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1632225684462/sfEj0Us5p.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Recently while debugging the codebase at my job, I had a use-case where I needed to know which method call happened where each such method call results in some SQL statement. The main focus was to know the order in which these functions were called so that I can map them to sql statements being logged in another node_module.</p>
<p>The number of unique methods that existed within the class responsible for this was 30. Now, for urgency's sake, I went berserk and added 30 <strong>console.log</strong> statements, one inside each such method, to log the method names. </p>
<p>But obviously, I didn't like it much. There had to be some way to achieve the same with some extra fancy code. </p>
<p><img src="https://media0.giphy.com/media/db30h5xnsXGqVNF3zY/200.gif" alt="coz i m fancy" /></p>
<p>That's where I began my intense Google search, ending up with 2 implementations. </p>
<p>But before that, let's consider a simplified version of my problem :-</p>
<pre><code class="lang-js"><span class="hljs-keyword">class</span> OperationManager {

operation1(){}

operation2(){}

operation3(){}

operation4(){}

operation5(){}

operation6(){}

operation7(){}

operation8(){}

operation9(){}

operation10(){}

}

<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">managerFactory</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> OperationManager();
}

<span class="hljs-comment">//Assume usage inside another function like so :-</span>
(<span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params">manager</span>) </span>{
  manager.operation8();

  manager.operation1();

  manager.operation4();

})(managerFactory());
</code></pre>
<p>Now I want to know the name of each <strong>operationX method</strong> that is called without adding a single <code>console.log('operationX called')</code> line inside the function bodies. </p>
<p>Time to check those 2 implementations :-</p>
<p><img src="https://c.tenor.com/3cUNs401gkUAAAAC/hurry-up-time-is-ticking.gif" alt="its time" /></p>
<h3 id="1-injecting-logging-behaviour">1. Injecting logging behaviour</h3>
<pre><code class="lang-js"><span class="hljs-comment">// The first argument passed in by inject below is the method's key (its name).</span>
<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">logFnCall</span>(<span class="hljs-params">key</span>) </span>{ 
  <span class="hljs-keyword">const</span> p = <span class="hljs-built_in">document</span>.createElement(<span class="hljs-string">'p'</span>);
  p.textContent = <span class="hljs-string">`<span class="hljs-subst">${key}</span> called`</span>
  <span class="hljs-built_in">document</span>.body.append(p);
}

<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">inject</span>(<span class="hljs-params">obj, func</span>) </span>{
  <span class="hljs-keyword">for</span> (<span class="hljs-keyword">let</span> key <span class="hljs-keyword">in</span> obj) {
    <span class="hljs-keyword">let</span> value = obj[key];
    <span class="hljs-keyword">if</span> (<span class="hljs-keyword">typeof</span> value === <span class="hljs-string">'function'</span>) {
      obj[key] = <span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params"></span>) </span>{
        func(key, value);
        <span class="hljs-keyword">return</span> value.apply(<span class="hljs-built_in">this</span>, <span class="hljs-built_in">arguments</span>);
      }
    }
  }
}

<span class="hljs-comment">// Modify the managerFactory() implementation like so :-</span>
<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">managerFactory</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-keyword">const</span> manager = <span class="hljs-keyword">new</span> OperationManager();
  inject(manager, logFnCall);
  <span class="hljs-keyword">return</span> manager;
}
</code></pre><h4 id="explanation">Explanation :-</h4>
<p>The <code>inject</code> function loops over all the keys available inside <code>obj</code>, and if the <code>value</code> obtained using a <code>key</code> is a <strong>function</strong>, we modify <code>obj[key]</code> to accommodate our custom <code>func</code> implementation and then call the old implementation with the appropriate <strong>context</strong> and <strong>arguments</strong>. 
So inside the <code>managerFactory</code> function, instead of returning the new instance as it is, we create it, save it in <code>manager</code>, and before returning it, we <strong>inject</strong> our custom <code>logFnCall</code> implementation. </p>
<p>Here is the codepen for the same :-</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://codepen.io/lapstjup/pen/rNwKaxp?editors=0011">https://codepen.io/lapstjup/pen/rNwKaxp?editors=0011</a></div>
<p>Wow so nothing came up. It's totally blank !!!!!</p>
<p><img src="https://media0.giphy.com/media/ZaKcIYMjNYNf4lEuC7/200.gif" alt="blank" /></p>
<p>And there is a good reason for that:-</p>
<p>In the <code>inject</code> implementation, we have this statement - <code>let key in obj</code> - which assumes that each <code>key</code> in <code>obj</code> is <strong>enumerable</strong>. But <code>obj</code> was created using a <strong>class</strong>, and class methods are <strong>non-enumerable</strong>. So we just need to modify this line to fetch those method names, which we can do by replacing the existing statement with <code>let key of Object.getOwnPropertyNames(obj.constructor.prototype)</code>. </p>
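<p>Here is a tiny standalone snippet (runnable in Node or the browser console) that demonstrates the difference; the class and method names are hypothetical:</p>

```javascript
class Demo {
  greet() {}
  wave() {}
}
const d = new Demo();

// for...in only visits enumerable properties, and class methods
// are defined as non-enumerable on the prototype:
const forInKeys = [];
for (const key in d) forInKeys.push(key);
console.log(forInKeys); // []

// Object.getOwnPropertyNames returns non-enumerable keys too
// (including "constructor"):
console.log(Object.getOwnPropertyNames(d.constructor.prototype));
// → [ 'constructor', 'greet', 'wave' ]
```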
<p>Here is a working codepen :-</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://codepen.io/lapstjup/pen/KKqewZv?editors=0010">https://codepen.io/lapstjup/pen/KKqewZv?editors=0010</a></div>
<h3 id="2-proxifying-logging-behaviour-i-personally-liked-this">2. Proxifying logging behaviour (I personally liked this)</h3>
<pre><code class="lang-js"><span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">logFnCall</span>(<span class="hljs-params">key,value</span>) </span>{ 
  <span class="hljs-keyword">if</span>(<span class="hljs-keyword">typeof</span> value===<span class="hljs-string">'function'</span> ){
  <span class="hljs-keyword">const</span> p = <span class="hljs-built_in">document</span>.createElement(<span class="hljs-string">'p'</span>);
  p.textContent = <span class="hljs-string">`<span class="hljs-subst">${key}</span> called`</span>
  <span class="hljs-built_in">document</span>.body.append(p);
  }
}

<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">proxify</span>(<span class="hljs-params">obj, func</span>) </span>{
  <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Proxy</span>(obj, {
    <span class="hljs-attr">get</span>: <span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params">target, key</span>) </span>{
      <span class="hljs-keyword">let</span> value = target[key];
      func(key, value);
      <span class="hljs-keyword">return</span> value;
      <span class="hljs-comment">// or return Reflect.get(...arguments) </span>
    }
  })
}

<span class="hljs-comment">// Modify the managerFactory() implementation like so :-</span>
<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">managerFactory</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-keyword">const</span> manager = <span class="hljs-keyword">new</span> OperationManager();
  <span class="hljs-keyword">const</span> proxyManager = proxify(manager, logFnCall);
  <span class="hljs-keyword">return</span> proxyManager;
}
</code></pre>
<h4 id="explanation">Explanation :-</h4>
<p>The <code>proxify</code> function returns a new <code>Proxy</code> object which has a <code>get</code> trap set up. The <code>get</code> trap receives <code>target</code>, which is <code>obj</code> itself, and <code>key</code>, which can be any <code>key</code> on <code>obj</code> itself, its prototype, and so forth. Whenever we do something like <code>manager.operation1</code>, the internal <strong>[[Get]]</strong> operation in the JS spec gets called to return the <code>operation1</code> property on <code>manager</code>. Proxy helps us intercept that call to <strong>[[Get]]</strong> to perform any operation that we want, and we have a good use-case here of logging the name of the function being accessed. To do that, we pass a custom <code>func</code> which takes the <code>key</code> and <code>value</code> being intercepted and performs the relevant operation according to those inputs. We return the <code>value</code> because we want the underlying <strong>[[Get]]</strong> behavior to remain the same. We can also return <code>Reflect.get(...)</code> to get the existing behavior, but not using it doesn't pose a problem here.
So inside the <code>managerFactory</code> function, instead of returning the new instance as it is, we create it, save it in <code>manager</code>, <strong>proxify</strong> the <code>manager</code> with our custom <code>logFnCall</code> implementation, and return the <code>proxyManager</code>. </p>
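<p>For a quick Node-runnable sketch of the same proxify idea, logging to the console instead of the DOM (the class and method names here are illustrative):</p>

```javascript
class OperationManager {
  operation1() { return 'one'; }
  operation2() { return 'two'; }
}

// Collect log lines in an array so the behavior is easy to inspect.
const calls = [];
function logFnCall(key, value) {
  if (typeof value === 'function') calls.push(`${key} called`);
}

function proxify(obj, func) {
  return new Proxy(obj, {
    get(target, key) {
      const value = target[key];
      func(key, value);
      return value; // preserve the default [[Get]] behavior
    },
  });
}

const manager = proxify(new OperationManager(), logFnCall);
manager.operation2();
manager.operation1();

console.log(calls);
// → [ 'operation2 called', 'operation1 called' ]
```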
<p>Here is a working codepen :-</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://codepen.io/lapstjup/pen/eYRKmjK?editors=0010">https://codepen.io/lapstjup/pen/eYRKmjK?editors=0010</a></div>
<p>I like the second implementation for the following reasons :-</p>
<ul>
<li>No direct mutation of method implementation like in <code>inject</code>. </li>
<li>The conditional behavior of checking whether the <code>value</code> is a function is implemented inside <code>logFnCall</code> in the second approach. This gives the developer more power to compose custom behaviors.</li>
<li>Not caring about the <strong>enumerable</strong> and <strong>non-enumerable</strong> stuff.</li>
</ul>
<p>A good resource to know more about <a target="_blank" href="https://javascript.info/proxy">Proxy</a></p>
<p>Have you encountered a similar situation and have an alternate proposal ? 
Go smash the comment section then 💻</p>
<h2 id="thank-you-for-your-time-d">Thank you for your time :D</h2>
]]></content:encoded></item><item><title><![CDATA[Web Performance - Reducing Paint flashing on Amazon.com]]></title><description><![CDATA[That's a mouthful title alright !!

But this was a challenge which was organized at the end of a Web Performance Bootcamp held by team devkode (founder - Sunny Puri) on JavaScript Meetup (co-organizer NC Patro) platform. If you're interested in knowi...]]></description><link>https://blog.lakbychance.com/web-performance-reducing-paint-flashing-on-amazoncom</link><guid isPermaLink="true">https://blog.lakbychance.com/web-performance-reducing-paint-flashing-on-amazoncom</guid><category><![CDATA[Web Development]]></category><category><![CDATA[CSS]]></category><category><![CDATA[HTML5]]></category><category><![CDATA[performance]]></category><category><![CDATA[Frontend Development]]></category><dc:creator><![CDATA[Lakshya Thakur]]></dc:creator><pubDate>Thu, 05 Aug 2021 17:30:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1628183654506/14y-sifRt.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>That's a mouthful title alright !!</p>
<p><img src="https://i.gifer.com/Btsm.gif" alt="so mouthful" /></p>
<p>But this was a challenge which was organized at the end of a <a target="_blank" href="https://www.meetup.com/JavaScriptMeetup/events/276621594/">Web Performance Bootcamp</a> held by team <a target="_blank" href="https://devkode.io/">devkode</a> (founder - <a target="_blank" href="https://www.linkedin.com/in/sunnypuri/">Sunny Puri</a>) on <strong>JavaScript Meetup</strong> (co-organizer <a target="_blank" href="https://www.linkedin.com/in/ncpatro/">NC Patro</a>) platform. If you're interested in knowing my experience of the same, then here is the <a target="_blank" href="https://www.linkedin.com/posts/lakshya-thakur_webdevelopment-bootcamps-teamdevkode-activity-6788698522888806400-coSs">link</a> but we will focus more on the challenge here.</p>
<p><strong>Problem Statement :-</strong></p>
<blockquote>
<p>When you open and close the <a target="_blank" href="https://www.amazon.com/">amazon.com</a> sidebar, <a target="_blank" href="https://developers.google.com/web/fundamentals/performance/rendering/simplify-paint-complexity-and-reduce-paint-areas">paint flashing</a> occurs which can be eliminated via CSS modification. So you're required to identify those changes and submit the same.</p>
</blockquote>
<p>If you're unfamiliar with terms like <strong>layout</strong>, <strong>paint</strong> and <strong>composition</strong>, I would recommend going through the <a target="_blank" href="https://developers.google.com/web/fundamentals/performance/critical-rendering-path">CRP</a> link to get a know-how, so you can better relate to the solution ahead.</p>
<p>Alright, so how do we first visualize this <strong>paint flashing</strong> stuff ?</p>
<p>To start with, let's go to amazon.com, where we will take the help of our age-old friend <strong>DevTools</strong> :-</p>
<p><img src="https://i1.wp.com/css-tricks.com/wp-content/uploads/2018/02/chrome-devtools.jpg?fit=1200%2C600&amp;ssl=1" alt="devtools pic" /></p>
<p>After opening the <strong>DevTools</strong>, let's press <kbd>Ctrl</kbd>/<kbd>Command</kbd> + <kbd>Shift</kbd> + <kbd>P</kbd> to open the <strong>Command Menu</strong>. Search for <strong>Show paint flashing rectangles</strong> and select that option.</p>
<p>Now if you try to interact with the left hamburger icon used to open/close the sidebar, you will see <strong>green</strong> flashing rectangles on the screen indicating that the <strong>paint</strong> pipeline is triggered. And it is this <strong>paint</strong> we aim to eliminate.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1628180955555/Il_GDPNEt.gif" alt="Amazon_default_paint_flashing.gif" /></p>
<p>Alright, so now we can visualize the problem, and before I go on to explain the solution, you can also try arriving at it yourself. Also, I want you to know that it took me <strong>1.5</strong> hours to identify the <strong>CSS modifications</strong>. You might take less or more time since that's all relative.</p>
<p>Did you try it ?</p>
<p>Yes/No ? (Drop in the comment section)</p>
<p><img src="https://telltaletv.com/wp-content/uploads/2018/05/the-good-place-okay-here-we-go-gif.gif" alt="Okay, here we go" /></p>
<p>I identified two main elements that were responsible for this <strong>paint flashing</strong> :-</p>
<ul>
<li><p>the CSS class <code>lock-position</code> on <code>body</code> tag</p>
</li>
<li><p>the sidebar element with <strong>id</strong> of <code>hmenu-container</code></p>
</li>
</ul>
<h3 id="heading-bodylock-position">body.lock-position</h3>
<p>Consider the following CSS when <em>sidebar is open</em> :-</p>
<pre><code class="lang-css">body.lock-position {
    overflow: hidden;
}
</code></pre>
<p>This is used by Amazon to prevent the scroll of page content when the sidebar is open. Now if we go to <a target="_blank" href="https://csstriggers.com/overflow-x">css triggers</a> to see the <strong>layout, paint and composition</strong> cost of the same, here is what we will find :-</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1628178843346/ZoMFr-xMo.png" alt="overflow-x-css-triggers" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1628178854157/0AjCe9pNtn.png" alt="overflow-y-css-triggers" /></p>
<p><code>overflow-x</code> and <code>overflow-y</code> combined are equivalent to the <code>overflow</code> CSS property, and as we can see, a change in it triggers the <strong>layout</strong> pipeline, making it an expensive operation.</p>
<p>If we remove <code>overflow:hidden</code> from <code>body.lock-position</code>, you will notice that there is no more <strong>paint flashing</strong> while closing the <strong>sidebar</strong>, but there still is while opening it. (Obviously, now that the sidebar no longer locks scroll, a user can scroll through the page content while it is open, but that was not a constraint.) We will eliminate the leftover <strong>paint flashing</strong> next.</p>
<p>The result till now :-</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1628181324922/M46w-rk3V.gif" alt="Amazon_body_fix_paint_flashing.gif" /></p>
<h3 id="heading-hmenu-container">#hmenu-container</h3>
<p>Consider the following CSS when <em>sidebar is closed</em> :-</p>
<pre><code class="lang-css">#hmenu-container {
    <span class="hljs-attr">position</span>: fixed;
    top: <span class="hljs-number">0</span>;
    left: <span class="hljs-number">0</span>;
    right: <span class="hljs-number">0</span>;
    bottom: <span class="hljs-number">0</span>;
    visibility: hidden;
    z-index: <span class="hljs-number">100000</span>;
}
</code></pre>
<p>Consider the following CSS when <em>sidebar is open</em>:-</p>
<pre><code class="lang-css">#hmenu-container.hmenu-visible {
    <span class="hljs-attr">visibility</span>: visible;
}
</code></pre>
<p>You will notice that the <code>visibility</code> CSS property initially takes a value of <code>hidden</code> and changes to <code>visible</code> on click of the hamburger icon. Now let's go to <a target="_blank" href="https://csstriggers.com/visibility">css triggers</a> to see what part of <strong>layout, paint and composition</strong> <code>visibility</code> triggers :-</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1628178282064/XOli1xIEN.png" alt="visibility-css-trigger" /></p>
<p>Now, in the bootcamp there was a section that talked about how the <code>transform</code> CSS property is used to perform UI changes on a separate layer via the <strong>compositor thread</strong>, and how the <code>will-change</code> CSS property lets the browser know beforehand how an element is expected to change, so it can promote the element to a separate layer as an optimization.</p>
<p>Let's look at how <code>transform</code> works from <a target="_blank" href="https://csstriggers.com/transform">css triggers</a>:-</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1628185346820/g0QyX-x1h_.png" alt="transform-css-triggers" /></p>
<p>Cool, let's use the above for the final piece of our solution :-</p>
<pre><code class="lang-css">#hmenu-container {
    <span class="hljs-attr">position</span>: fixed;
    top: <span class="hljs-number">0</span>;
    left: <span class="hljs-number">0</span>;
    right: <span class="hljs-number">0</span>;
    bottom: <span class="hljs-number">0</span>;
    will-change:transform;
    transform: translateX(<span class="hljs-number">-100</span>%);
    z-index: <span class="hljs-number">100000</span>;
}

#hmenu-container.hmenu-visible {
    <span class="hljs-attr">transform</span>: translateX(<span class="hljs-number">0</span>);
}
</code></pre>
<p>To mimic the <strong>sidebar being closed</strong>, we can add <code>transform: translateX(-100%);</code> to <code>#hmenu-container</code>, which will shift the whole container to the left by 100% of its width.</p>
<p>And to mimic the <strong>sidebar being open</strong>, we can add <code>transform: translateX(0);</code> to <code>#hmenu-container.hmenu-visible</code>, which will show the whole container in the current viewport.</p>
<p>Here is the final result :-</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1628181814924/pUhagrVlv.gif" alt="Amazon_final_result.gif" /></p>
<p><strong>Note:- The above will work even without</strong> <code>will-change:transform</code>, since that's an additional optimization that promotes <code>#hmenu-container</code> to another layer before the sidebar is opened.</p>
<p>And that's it for the solution. This challenge was fun because it integrated what we learned at the bootcamp. Also, I think the other participants who completed the challenge might have had different approaches. So if you figure out some other way, do share the same in the comments 👇.</p>
<p>P.S. - Btw if you're wondering about those slick looking keyboard keys support in markdown, check out <a target="_blank" href="https://townhall.hashnode.com/hashnodes-editor-now-supports-8-new-html-tags-kbd-abbr-sub-sup-and-more">this</a>. Also the <strong>cover image</strong> is generated using <a target="_blank" href="https://slickr.vercel.app/app">this</a> awesome web-app by <a class="user-mention" href="https://hashnode.com/@saviomartin">Savio Martin</a>.</p>
<h2 id="heading-thank-you-for-your-time-d">Thank you for your time :D</h2>
]]></content:encoded></item><item><title><![CDATA[A wild handleEvent appeared 😮 !!!]]></title><description><![CDATA[Let's say, we have a DOM element by the name of element and we want to add event listeners to it. How would you do so ?
Here are two ways which can come into mind :-
const handleClick = () =>{console.log('You can remove me later safely')}
element.add...]]></description><link>https://blog.lakbychance.com/a-wild-handleevent-appeared</link><guid isPermaLink="true">https://blog.lakbychance.com/a-wild-handleevent-appeared</guid><category><![CDATA[JavaScript]]></category><category><![CDATA[Web Development]]></category><category><![CDATA[Frontend Development]]></category><category><![CDATA[learning]]></category><category><![CDATA[webdev]]></category><dc:creator><![CDATA[Lakshya Thakur]]></dc:creator><pubDate>Sat, 10 Jul 2021 14:12:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1625926122192/b5k8epAiy.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Let's say, we have a DOM element by the name of <code>element</code> and we want to add event listeners to it. How would you do so ?</p>
<p>Here are two ways which can come into mind :-</p>
<pre><code class="lang-js"><span class="hljs-keyword">const</span> handleClick = <span class="hljs-function">() =&gt;</span>{<span class="hljs-built_in">console</span>.log(<span class="hljs-string">'You can remove me later safely'</span>)}
element.addEventListener(<span class="hljs-string">'click'</span>,handleClick);
element.addEventListener(<span class="hljs-string">'click'</span>,<span class="hljs-function">()=&gt;</span><span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Try and remove me noob'</span>));
</code></pre>
<p>Now when it comes to removing these event listeners, it's not possible to remove the second one since it's anonymous; for the first one, we can just do <code>element.removeEventListener('click',handleClick);</code></p>
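<p>Here is a minimal sketch of that pitfall. It uses the standard <code>EventTarget</code> constructor in place of a DOM element so it also runs outside a browser; the names are just for illustration :-</p>
<pre><code class="lang-js">const target = new EventTarget(); // stand-in for a DOM element
let calls = 0;

const handleClick = function () { calls += 1; };
target.addEventListener('click', handleClick);
// Anonymous listener: we keep no reference to it, so it can never be removed.
target.addEventListener('click', function () { calls += 1; });

target.dispatchEvent(new Event('click')); // both listeners fire, calls is now 2

target.removeEventListener('click', handleClick); // works: same reference
// A new, identical-looking function is a different reference, so this is a silent no-op:
target.removeEventListener('click', function () { calls += 1; });

target.dispatchEvent(new Event('click')); // only the anonymous listener fires

console.log(calls); // 3
</code></pre>
<p>Removal only works when you pass the exact same function reference that was registered, which is why the anonymous one is stuck forever.</p>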
<p>What if I told you there is a way and a syntax you might not be familiar with when it comes to event listeners ?</p>
<p><img src="https://i.gifer.com/D5UY.gif" alt="you're lying" /></p>
<p>Well here it is :-</p>
<pre><code class="lang-js"><span class="hljs-keyword">const</span> someObj = {
<span class="hljs-attr">handleEvent</span>: <span class="hljs-function">(<span class="hljs-params">e</span>)=&gt;</span><span class="hljs-built_in">console</span>.log(<span class="hljs-string">`I am <span class="hljs-subst">${e.type}</span> event`</span>)
}

element.addEventListener(<span class="hljs-string">'click'</span>,someObj);
</code></pre>
<p>And :-</p>
<p><img src="https://media2.giphy.com/media/eIfYQTaK3148kmMCxT/giphy.gif" alt="this is fine" /></p>
<p>Jokes aside, it's <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/API/EventListener/handleEvent">always</a> been there. It's just less spoken about. And I came across this when I solved <a target="_blank" href="https://stackoverflow.com/questions/67671873/typescript-handleevent-definition/67672217#67672217">this</a> StackOverflow question and my mind was blowwwwnn !!! </p>
<p><img src="https://www.icegif.com/wp-content/uploads/mind-blown-icegif-3.gif" alt="mind blown" /></p>
<p>Also, you can just remove the event listener like so: <code>element.removeEventListener('click',someObj);</code></p>
<p>After finding this, I thought to myself: what if I make a bare minimum <code>Handler</code> class that can abstract the <strong>registration</strong> and <strong>unregistration</strong> bits and work on the same principle ? </p>
<p>And this is how it looks :-</p>
<pre><code class="lang-js"><span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Handler</span> </span>{
  #element
  #eventMap = {}

  <span class="hljs-keyword">constructor</span>(element, eventMap) {
    <span class="hljs-built_in">this</span>.#element = element
    <span class="hljs-built_in">this</span>.register(eventMap)
  }

  handleEvent(e) {
    <span class="hljs-built_in">this</span>.#eventMap[e.type](e)
  }

  register(eventMap) {
    <span class="hljs-built_in">this</span>.#eventMap = { ...this.#eventMap, ...eventMap }
    <span class="hljs-built_in">Object</span>.keys(<span class="hljs-built_in">this</span>.#eventMap).forEach(<span class="hljs-function">(<span class="hljs-params">event</span>) =&gt;</span> {
      <span class="hljs-built_in">this</span>.#element.addEventListener(event, <span class="hljs-built_in">this</span>)
    })
  }

  unregister(event) {
    <span class="hljs-built_in">this</span>.#element.removeEventListener(event, <span class="hljs-built_in">this</span>)
  }

  unregisterAll() {
    <span class="hljs-built_in">Object</span>.keys(<span class="hljs-built_in">this</span>.#eventMap).forEach(<span class="hljs-function">(<span class="hljs-params">event</span>) =&gt;</span> {
      <span class="hljs-built_in">this</span>.#element.removeEventListener(event, <span class="hljs-built_in">this</span>)
    })
  }
}
</code></pre>
<p>But what made me go for a class implementation ? Well, now that we know we can pass an <strong>object</strong> to <code>add/removeEventListener</code>, we can have a custom <code>Handler</code> class inside which <code>this</code> will point to the <strong>object instance</strong> and come into use. </p>
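<p>A minimal sketch of that <code>this</code> binding (again using the standard <code>EventTarget</code> constructor so it runs outside a browser): the spec calls <code>listener.handleEvent(event)</code> with <code>this</code> set to the listener object, so the object can keep state :-</p>
<pre><code class="lang-js">const counter = {
  count: 0,
  handleEvent(e) {
    // `this` is `counter`, the object we passed to addEventListener
    this.count += 1;
  }
};

const target = new EventTarget(); // stand-in for a DOM element
target.addEventListener('tick', counter);
target.dispatchEvent(new Event('tick'));
target.dispatchEvent(new Event('tick'));

console.log(counter.count); // 2
</code></pre>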
<p>Let's look at a usage sample of this code :-</p>
<pre><code class="lang-js"><span class="hljs-keyword">const</span> handler = <span class="hljs-keyword">new</span> Handler(element, {
  <span class="hljs-attr">click</span>: <span class="hljs-function">()=&gt;</span><span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Yo I am clicky'</span>),
  <span class="hljs-attr">focus</span>: <span class="hljs-function">()=&gt;</span><span class="hljs-built_in">console</span>.log(<span class="hljs-string">'FOCUS!!!'</span>),
});
</code></pre>
<p>What the above does is that for <code>element</code>, it registers both anonymous functions for their respective events. And if you go further and register another function for <code>click</code> like so :-</p>
<pre><code class="lang-js">  handler.register({
    <span class="hljs-attr">click</span>: <span class="hljs-function">() =&gt;</span> <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Well I am new clicky'</span>)
  });
</code></pre>
<p>This will override the existing <code>click</code> function that we had, without any worry of handling its removal, and register this new anonymous function instead. </p>
<p>Now if you want to explicitly <code>unregister</code> the <code>click</code> function, how would you do so ?</p>
<pre><code class="lang-js">handler.unregister(<span class="hljs-string">'click'</span>);
</code></pre>
<p><img src="https://i.pinimg.com/originals/f5/8e/b2/f58eb2e7e1f40e78a36bd9a8bb1ba4e1.gif" alt="that's it" /></p>
<p>So <strong>anonymous</strong> or <strong>non-anonymous</strong>, the <code>Handler</code> class will ensure that for each <strong>event type</strong>, only one function is registered for the same element. But what if I want to register multiple functions for same <strong>event type</strong> for the same element ?</p>
<p>Well in that case, you can create another instance of <code>Handler</code> class with same <code>element</code> and let it be responsible for it. </p>
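<p>For instance, here is a sketch of that setup, reproducing a trimmed-down version of the <code>Handler</code> class from above and using the standard <code>EventTarget</code> constructor in place of a DOM element :-</p>
<pre><code class="lang-js">class Handler {
  #element
  #eventMap = {}

  constructor(element, eventMap) {
    this.#element = element
    this.register(eventMap)
  }

  handleEvent(e) {
    this.#eventMap[e.type](e)
  }

  register(eventMap) {
    this.#eventMap = { ...this.#eventMap, ...eventMap }
    Object.keys(this.#eventMap).forEach((event) => {
      this.#element.addEventListener(event, this)
    })
  }

  unregister(event) {
    this.#element.removeEventListener(event, this)
  }
}

const element = new EventTarget(); // stand-in for a DOM element
const log = [];

// Two independent instances, each owning one click listener on the same element
const first = new Handler(element, { click: () => log.push('first') });
const second = new Handler(element, { click: () => log.push('second') });

element.dispatchEvent(new Event('click')); // both fire: log is ['first', 'second']

first.unregister('click');
element.dispatchEvent(new Event('click')); // only the second fires now

console.log(log); // ['first', 'second', 'second']
</code></pre>
<p>Each instance is a distinct listener object, so the browser treats them as separate listeners even for the same event type on the same element.</p>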
<p><img src="https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcS2iOSIwsdMSZUSbtjbL-Z5QHtgKsCMqLyr0g&amp;usqp=CAU" alt="single responsibility" /></p>
<p>It's still a new concept to me and maybe I might have derived some wrong conclusions. But I will be more than happy to know more about it. Did you know this ? If so, have you used this ? Do you not prefer it ? Any bottlenecks ? Feel free to bash that comment section 💪.</p>
<p>You can go through <a target="_blank" href="https://webreflection.medium.com/dom-handleevent-a-cross-platform-standard-since-year-2000-5bf17287fd38">this</a> article for more insights into <code>handleEvent</code>. </p>
<p>Here is a codepen where you can play with this implementation :-</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://codepen.io/lapstjup/pen/ExWmBVX?editors=1111">https://codepen.io/lapstjup/pen/ExWmBVX?editors=1111</a></div>
<h3 id="heading-thank-you-for-your-time-d">Thank you for your time :D</h3>
]]></content:encoded></item><item><title><![CDATA[Implementing String repeat() function in JS]]></title><description><![CDATA[As per MDN,

The repeat() method constructs and returns a new string which contains the specified number of copies of the string on which it was called, concatenated together.

Now one might think that there is a really straightforward to implement t...]]></description><link>https://blog.lakbychance.com/implementing-string-repeat-function-in-js</link><guid isPermaLink="true">https://blog.lakbychance.com/implementing-string-repeat-function-in-js</guid><category><![CDATA[JavaScript]]></category><category><![CDATA[algorithms]]></category><category><![CDATA[data structures]]></category><category><![CDATA[problem solving skills]]></category><category><![CDATA[Web Development]]></category><dc:creator><![CDATA[Lakshya Thakur]]></dc:creator><pubDate>Thu, 08 Jul 2021 13:55:07 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1625752149308/4dH7LAZCE.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As per <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/repeat">MDN</a>,</p>
<blockquote>
<p>The <code>repeat()</code> method constructs and returns a new string which contains the specified number of copies of the string on which it was called, concatenated together.</p>
</blockquote>
<p>Now one might think that there is a really straightforward way to implement this. Yes there is, but if asked in an interview and you go with the straightforward way, they will be like :-</p>
<p><img src="https://i.gifer.com/OTc.gif" alt="meh" /></p>
<p>How do I know this ?</p>
<p>Because I got <strong>mehhhhd......</strong></p>
<p>So that's why we are going to look at a few approaches to solve it. The really optimized approach was not intuitive to me and is still something I am trying to wrap my head around. But I came up with a middle-ground approach that works better than the <strong>meh!!</strong> one. </p>
<p>And once again, we will take a synonym for <code>repeat</code>. 
<strong>Google time</strong> !!!!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1625748971593/DxiBTEwj4.png" alt="Screen Shot 2021-07-08 at 6.24.59 PM.png" /></p>
<p><code>replicate</code> sounds cool. </p>
<p><img src="https://thumbs.gfycat.com/PlainDependentChicken-max-1mb.gif" alt="replicate" /></p>
<p>Alright let's go implement <code>String.prototype.replicate</code> now :-</p>
<h3 id="the-meh-approach">The Meh Approach</h3>
<pre><code class="lang-js"><span class="hljs-built_in">String</span>.prototype.replicate = <span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params">count</span>) </span>{
  <span class="hljs-keyword">let</span> input = <span class="hljs-built_in">this</span>;
  <span class="hljs-keyword">let</span> result = <span class="hljs-string">""</span>;
  <span class="hljs-keyword">for</span> (<span class="hljs-keyword">let</span> index = <span class="hljs-number">0</span>; index &lt; count; index++) {
    result += input;
  }
  <span class="hljs-keyword">return</span> result;
}
</code></pre>
<p><strong>Meh explanation :-</strong>
We initialize <code>result</code> to <code>""</code> and start a <code>for</code> loop in which we iterate till <code>count</code>, simply appending the <code>input</code> to the <code>result</code> variable on each iteration. Very straightforward but <strong>meh!!</strong>. </p>
<p><strong>Meh benchmark :-</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1625740731371/DujtY5kUs.png" alt="Screen Shot 2021-07-08 at 4.08.08 PM.png" /></p>
<p><strong>100 % slower with 108 operations per second compared to 9202566.4 operations per second </strong>. Let me cry in the corner. </p>
<p><img src="https://sanjeetv.files.wordpress.com/2015/04/crying-lover.gif" alt="cries" /></p>
<h3 id="the-little-less-meh-approach">The Little Less Meh Approach</h3>
<pre><code class="lang-js"><span class="hljs-built_in">String</span>.prototype.replicate = <span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params">count</span>) </span>{
  <span class="hljs-keyword">let</span> input = <span class="hljs-built_in">this</span>;
  <span class="hljs-keyword">let</span> result = <span class="hljs-built_in">this</span>.valueOf();
  <span class="hljs-keyword">for</span> (<span class="hljs-keyword">var</span> index = <span class="hljs-number">2</span>; index &lt; count; index*=<span class="hljs-number">2</span>) {
    result += result;
  }
  <span class="hljs-keyword">let</span> remainingCount = count - index/<span class="hljs-number">2</span>;
  <span class="hljs-keyword">return</span> remainingCount &gt; <span class="hljs-number">0</span> ? result + input.replicate(remainingCount) : result;
}
</code></pre>
<p><strong>Little Less Meh explanation :-</strong></p>
<ul>
<li>Let's consider the case of <code>'hey'.replicate(10)</code> :-<ul>
<li>We have <code>input</code> initialized to <code>this</code> and <code>result</code> initialized to <code>this.valueOf()</code>. The <code>valueOf()</code> call converts the wrapper object to a primitive string up front, which avoids the implicit conversion that would otherwise happen every time <code>result</code> is concatenated to itself. </li>
<li>Now the <code>for</code> loop stuff - <ul>
<li><code>index</code> is intialized to <code>2</code>.</li>
<li><code>index</code> should be less than <code>count</code></li>
<li><code>index</code> should be multiplied each time by <code>2</code></li>
<li><code>result</code> will be appended to itself each time in the iteration:-<ul>
<li><code>result</code> for <code>index = 2</code> will become <code>heyhey</code></li>
<li><code>result</code> for <code>index = 4</code> will become <code>heyheyheyhey</code></li>
<li><code>result</code> for <code>index = 8</code> will become <code>heyheyheyheyheyheyheyhey</code></li>
<li><code>index</code> will become <code>16</code> which is greater than <code>10</code> and we exit the loop.</li>
</ul>
</li>
<li><code>remainingCount</code> will be <code>10</code> - <code>16/2</code> = <code>2</code>;</li>
<li>When <code>remainingCount</code> will be greater than <code>0</code>, we will recurse by calling <code>input.replicate(remainingCount)</code> and add its result to current <code>result</code> or simply return <code>result</code>. </li>
</ul>
</li>
</ul>
</li>
</ul>
<p><strong>Little Less Meh benchmark :-</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1625746174280/bSZMrIFyv.png" alt="Screen Shot 2021-07-08 at 5.39.19 PM.png" /></p>
<p><strong>76.79% slower with 2109699.5 operations per second compared to 9091332.85 operations per second </strong>. That's still relatively slower than the native one but way way way faster than what we had initially. </p>
<p><img src="https://media2.giphy.com/media/B1CrvUCoMxhy8/200.gif" alt="i am speed" /></p>
<p>Earlier, performing the repetitions was <strong>O(count)</strong>, but now the same is somewhere along the lines of <strong>O(log(x) + log(y) + .... + log(k))</strong>, though not quite <strong>O(log(count))</strong>. </p>
<p>In <code>'hey'.replicate(10)</code> scenario :-</p>
<ul>
<li>First, <strong>O(log(8))</strong> work is done, and then in the next recursive step <strong>O(log(2))</strong>, i.e. <code>O(log(8) + log(2))</code>. And if I am doing the maths correctly, </li>
</ul>
<p><code>log(a) + log(b) = log(ab)</code></p>
<p>That means <code>O(log(8) + log(2))</code> is <code>O(log(16))</code>, which is greater than <code>O(log(10))</code> (the optimal solution). </p>
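<p>The identity is easy to sanity-check numerically (a throwaway snippet, using base-2 logs since our doublings are powers of 2) :-</p>
<pre><code class="lang-js">const a = 8, b = 2;
const lhs = Math.log2(a) + Math.log2(b); // log(8) + log(2) = 3 + 1
const rhs = Math.log2(a * b);            // log(16) = 4

console.log(lhs, rhs); // 4 4
</code></pre>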
<h3 id="the-legendary-optimal-solutionhttpsstackoverflowcoma54501138130690-i-would-have-never-landed-upon-without-the-internet">The legendary optimal <a target="_blank" href="https://stackoverflow.com/a/5450113/8130690">solution</a> I would have never landed upon without the internet</h3>
<pre><code class="lang-js"><span class="hljs-built_in">String</span>.prototype.replicate = <span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params">count</span>) </span>{
    <span class="hljs-keyword">let</span> result = <span class="hljs-string">''</span>
    <span class="hljs-keyword">let</span> pattern = <span class="hljs-built_in">this</span>.valueOf();
    <span class="hljs-keyword">while</span> (count &gt; <span class="hljs-number">0</span>) {
        <span class="hljs-keyword">if</span> (count &amp; <span class="hljs-number">1</span>) 
            result += pattern;
        count &gt;&gt;= <span class="hljs-number">1</span>
        <span class="hljs-keyword">if</span> (count) pattern += pattern;
    }
    <span class="hljs-keyword">return</span> result;
};
</code></pre>
<p><strong>Noob explanation :-</strong></p>
<p>I am still trying to understand the intuition behind this solution, but I think it has to do with the fact that every number can be represented in binary. Say <code>count</code> is <strong>5</strong>; then it can be represented as <code>101</code> in binary. So it's possible for us to repeat the string <code>count</code> times by just resorting to <strong>binary calculations</strong>. If we compare <strong>4</strong> and <strong>5</strong>, we know there is an extra <strong>1</strong> in the latter case.</p>
<p>Now, instead of seeing the above code as some <strong>binary work of art</strong>, replace <strong>count&amp;1</strong> with <strong>count%2!==0</strong> and <strong>count&gt;&gt;=1</strong> with <strong>count=Math.floor(count/2)</strong>. What this means is that whenever <code>count</code> is odd, we want to save the <code>pattern</code> built so far into the <code>result</code> variable. And what is <code>pattern</code> ? It is the repeated concatenation of itself, similar to our earlier algorithm, so it always grows in powers of 2. It's necessary to take care of the situation when <code>count</code> is <strong>not divisible by 2</strong> and store the current <code>pattern</code> in <code>result</code> as we go, until <code>count</code> becomes 0. </p>
<p>Did you expect a better explanation ? I can't give it right now since I am a <strong>noob</strong> in binary land. But maybe somewhere in a parallel universe I invented this Algo and helped Brendan Eich get rid of <code>typeof null</code> -&gt; <code>object</code> 🤷‍♂️.</p>
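<p>If the bitwise form still feels opaque, here is the same loop rewritten with plain arithmetic (a sketch using the name <code>replicate</code> as a standalone function rather than a prototype method), traced for <code>count = 5</code> :-</p>
<pre><code class="lang-js">function replicate(pattern, count) {
  let result = '';
  while (count) {
    if (count % 2 !== 0) result += pattern; // low bit is 1: keep this power of 2
    count = Math.floor(count / 2);          // drop the low bit
    if (count) pattern += pattern;          // double for the next bit
  }
  return result;
}

// Trace for replicate('ab', 5), i.e. count = 101 in binary:
//   count = 5 (odd)  -> result = 'ab',         pattern doubles to 'abab'
//   count = 2 (even) -> result unchanged,      pattern doubles to 'abababab'
//   count = 1 (odd)  -> result = 'ababababab', loop ends
console.log(replicate('ab', 5)); // 'ababababab'
</code></pre>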
<p><strong> Best benchmark yet :- </strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1625748199864/MpBnx5siM.png" alt="Screen Shot 2021-07-08 at 6.13.04 PM.png" /></p>
<p><strong> Still 29% slower ? WTH. But hey, I ain't competing with JavaScript engines here </strong>.</p>
<h3 id="the-bonus-mdn-polyfillhttpsdevelopermozillaorgen-usdocswebjavascriptreferenceglobalobjectsstringrepeat">The Bonus MDN <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/repeat">polyfill</a></h3>
<pre><code class="lang-js"> <span class="hljs-built_in">String</span>.prototype.replicate = <span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params">count</span>) </span>{
    <span class="hljs-keyword">var</span> str = <span class="hljs-string">''</span> + <span class="hljs-built_in">this</span>;
    count = +count;
    count = <span class="hljs-built_in">Math</span>.floor(count);
    <span class="hljs-keyword">if</span> (str.length == <span class="hljs-number">0</span> || count == <span class="hljs-number">0</span>)
      <span class="hljs-keyword">return</span> <span class="hljs-string">''</span>;
    <span class="hljs-keyword">var</span> maxCount = str.length * count;
    count = <span class="hljs-built_in">Math</span>.floor(<span class="hljs-built_in">Math</span>.log(count) / <span class="hljs-built_in">Math</span>.log(<span class="hljs-number">2</span>));
    <span class="hljs-keyword">while</span> (count) {
       str += str;
       count--;
    }
    str += str.substring(<span class="hljs-number">0</span>, maxCount - str.length);
    <span class="hljs-keyword">return</span> str;
  }
</code></pre>
<p>  <strong> Expected an explanation ? I don't care and you will see why 👇 </strong></p>
<p><strong> The mandatory benchmark :- </strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1625747653112/GyLHeC4Pj.png" alt="Screen Shot 2021-07-08 at 6.02.31 PM.png" /></p>
<p><strong> 99.94 % slower with 5211.6 operations per second compared to 8344361.29 operations per second </strong>. And there is a definite reason why it is even slower than what I came up with. What I think is happening is that up to a <strong>power of 2</strong> less than <code>count</code>, it uses the same ideology as the optimal solution, concatenating and doubling the length of <code>str</code> every time. But after that, for the remaining length, it uses <code>substring</code> and appends the result to <code>str</code> again. It's this second <code>substring</code> step that makes it a costly operation. Though it does better than the initial <strong>Meh</strong> solution's <strong>108</strong> ops/s, it's still nowhere near the best optimal solution I found online, or even mine 😎.</p>
<p><strong>MDN : 0</strong>
<strong>Lakshya : 1</strong></p>
<p>JK. The site is and hopefully remains a gold mine ❤️.</p>
<p>Here are the overall benchmarks :-</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1625751335374/r-b1Q1DBf0.png" alt="Screen Shot 2021-07-08 at 7.03.55 PM.png" /></p>
<p>Have something to add on ? Feel free to </p>
<p><img src="https://media2.giphy.com/media/j538QnzOR5AkKD3LWm/giphy.gif?cid=6c09b952r6jkcrjqoo9h1m5ic7gevgcuh8vai6069r8mppf2&amp;rid=giphy.gif&amp;ct=s" alt="comment" /></p>
<h3 id="thank-you-for-your-time-d">Thank you for your time :D</h3>
]]></content:encoded></item><item><title><![CDATA[Implementing the DOM contains() method]]></title><description><![CDATA[As per MDN,

The Node.contains() method returns a Boolean value indicating whether a node is a descendant of a given node, i.e. the node itself, one of its direct children (childNodes), one of the children's direct children, and so on.

But wait, Nod...]]></description><link>https://blog.lakbychance.com/implementing-the-dom-contains-method</link><guid isPermaLink="true">https://blog.lakbychance.com/implementing-the-dom-contains-method</guid><category><![CDATA[JavaScript]]></category><category><![CDATA[algorithms]]></category><category><![CDATA[data structures]]></category><category><![CDATA[Web Development]]></category><category><![CDATA[Frontend Development]]></category><dc:creator><![CDATA[Lakshya Thakur]]></dc:creator><pubDate>Tue, 06 Jul 2021 17:06:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1625590647976/xju6WJOEe.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As per <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/API/Node/contains">MDN</a>,</p>
<blockquote>
<p>The <strong>Node.contains()</strong> method returns a Boolean value indicating whether a node is a descendant of a given node, i.e. the node itself, one of its direct children (<strong>childNodes</strong>), one of the children's direct children, and so on.</p>
</blockquote>
<p>But wait, <code>Node.prototype.contains(...)</code> already exists. I want another name for our custom function. Let's google a synonym of <code>contains</code> coz </p>
<p><img src="https://media1.tenor.com/images/cb10e995344ad2efd30b118665968ec4/tenor.gif?itemid=13251824" alt="why not" /></p>
<h3 id="intense-googling-later">Intense googling later......</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1625579876465/RdC5aInl8h.png" alt="Screen Shot 2021-07-06 at 7.26.12 PM.png" /></p>
<p>Certainly we ain't going with <strong>swallow</strong>. I think <strong>includes</strong> would be cool, as both <code>Array</code> and <code>String</code> have <strong>includes</strong> in their <strong>prototypes</strong> as well. </p>
<p><img src="https://media4.giphy.com/media/SVOVZaQynOVY5mA5WN/200.gif" alt="inclusion" /></p>
<p>Before we proceed, one important thing to know is that when adding a new method to a <code>prototype</code> and expecting to use it like so :-</p>
<p> <code>document.includes(document.body)</code>, </p>
<p>the <strong>method</strong> should not be an <strong>arrow function</strong>, so that <code>document</code> can be accessed inside the <code>includes</code> function via the <code>this</code> keyword. </p>
<p>Alright then, let's implement <code>Node.prototype.includes</code> in 4 different ways :-</p>
<p><img src="https://media1.tenor.com/images/b8743c8e18e16e5b3eb2f9e93e3c2e86/tenor.gif?itemid=14057657" alt="four" /></p>
<h3 id="the-recursive-dfs">The recursive DFS</h3>
<pre><code class="lang-js"><span class="hljs-number">1</span> Node.prototype.includes = <span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params">node</span>)</span>{
<span class="hljs-number">2</span> <span class="hljs-keyword">const</span> currentNode = <span class="hljs-built_in">this</span>;
<span class="hljs-number">3</span>  <span class="hljs-keyword">if</span>(!currentNode)
<span class="hljs-number">4</span>   <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>;
<span class="hljs-number">5</span>  <span class="hljs-keyword">if</span>(currentNode===node)
<span class="hljs-number">6</span>   <span class="hljs-keyword">return</span> <span class="hljs-literal">true</span>;
<span class="hljs-number">7</span>  <span class="hljs-keyword">let</span> isNodeFound = <span class="hljs-literal">false</span>;
<span class="hljs-number">8</span> <span class="hljs-keyword">for</span>(<span class="hljs-keyword">let</span> index = <span class="hljs-number">0</span>;index&lt;currentNode.childNodes.length;index++){
<span class="hljs-number">9</span>    isNodeFound = isNodeFound || currentNode.childNodes[index].includes(node);
<span class="hljs-number">10</span>   <span class="hljs-keyword">if</span>(isNodeFound) <span class="hljs-keyword">return</span> <span class="hljs-literal">true</span>;
<span class="hljs-number">11</span>  }
<span class="hljs-number">12</span>  <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>;
<span class="hljs-number">13</span> }
</code></pre>
<p><strong>Explanation :-</strong></p>
<ul>
<li>Line <strong>2 to 4</strong> - Set <code>currentNode</code> to <code>this</code> and If <code>currentNode</code> doesn't exist, simply return <code>false</code>.</li>
<li>Line <strong>5 to 6</strong>  - if <code>currentNode</code> is equal to <code>node</code> return <code>true</code>.</li>
<li>Line <strong>7 to 13</strong> -  Initialize <code>isNodeFound</code> to <code>false</code>. Then loop over <code>childNodes</code> of the <code>currentNode</code> and on each child, call the <code>includes</code> method again to check if they include the <code>node</code> element. If they do, <code>isNodeFound</code> will ultimately become <code>true</code> since it is being <strong>Orrrrrrd</strong> with the results coming from respective <code>childNodes</code> and reassigned to itself. Once <code>isNodeFound</code> is <code>true</code>, we don't need to loop over rest of the <code>childNodes</code> of <code>currentNode</code> and exit early by returning <code>true</code> else ultimately return <code>false</code>.</li>
</ul>
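<p>To see the recursion at work outside the browser, here is a minimal sketch that swaps real DOM nodes for hypothetical plain objects carrying only a <code>childNodes</code> array (the <code>makeNode</code> factory and the standalone <code>includes</code> function are illustrative stand-ins, not DOM APIs):</p>

```javascript
// Hypothetical stand-in for a DOM node: just a childNodes array.
function makeNode(...childNodes) {
  return { childNodes };
}

// Same recursive DFS as above, written as a plain function
// instead of a Node.prototype method.
function includes(currentNode, node) {
  if (!currentNode) return false;
  if (currentNode === node) return true;
  for (let index = 0; index < currentNode.childNodes.length; index++) {
    // Recurse into each child and exit early on the first match.
    if (includes(currentNode.childNodes[index], node)) return true;
  }
  return false;
}

// root -> (a, b), a -> (leaf)
const leaf = makeNode();
const a = makeNode(leaf);
const b = makeNode();
const root = makeNode(a, b);

console.log(includes(root, leaf)); // true
console.log(includes(a, b));       // false
```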
<hr />
<h3 id="the-iterative-bfs">The iterative BFS</h3>
<pre><code class="lang-js"><span class="hljs-number">1</span> Node.prototype.includes = <span class="hljs-function"><span class="hljs-keyword">function</span> (<span class="hljs-params">node</span>) </span>{
<span class="hljs-number">2</span> <span class="hljs-keyword">const</span> queue = [];
<span class="hljs-number">3</span>  <span class="hljs-keyword">let</span> currentNode = <span class="hljs-built_in">this</span>;
<span class="hljs-number">4</span>  queue.push(currentNode);
<span class="hljs-number">5</span>  <span class="hljs-keyword">while</span> (queue.length) {
<span class="hljs-number">6</span>    currentNode = queue.shift();
<span class="hljs-number">7</span>    <span class="hljs-keyword">if</span> (currentNode === node) <span class="hljs-keyword">return</span> <span class="hljs-literal">true</span>;
<span class="hljs-number">8</span>    <span class="hljs-keyword">if</span> (currentNode.hasChildNodes()) {
<span class="hljs-number">9</span>      queue.push(...currentNode.childNodes);
<span class="hljs-number">10</span>    }
<span class="hljs-number">11</span> }
<span class="hljs-number">12</span>  <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>;
<span class="hljs-number">13</span> };
</code></pre>
<p><strong>Explanation :-</strong></p>
<ul>
<li>Line <strong>2 to 4</strong> - Initialize an empty list as <code>queue</code>. Set <code>currentNode</code> to <code>this</code> and <code>push</code> (or <strong>enqueue</strong> to be <strong>specific</strong>) it. </li>
<li>Line <strong>5 to 12</strong> - While the <code>queue</code> is <strong>not empty</strong>, <strong>dequeue</strong> the <code>currentNode</code> from the front of the <code>queue</code> (using <code>shift</code> here). If <code>currentNode</code> is equal to <code>node</code>, return <code>true</code>. Otherwise, <strong>enqueue</strong> the <code>childNodes</code> of <code>currentNode</code> (using <code>push</code> here). Once we are out of the <code>while</code> loop, we have traversed all the <strong>nodes</strong> and can safely say that we couldn't find the <code>node</code>, so we return <code>false</code>.</li>
</ul>
<p><strong>Note - The above can be transformed into an iterative DFS by using <code>pop</code> instead of <code>shift</code> and, for the sake of consistency, renaming <code>queue</code> to <code>stack</code>.</strong></p>
<p><img src="https://media2.giphy.com/media/l1KVb2dUcmuGG4tby/200.gif" alt="nerd" /></p>
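<p>The queue-to-stack swap is easy to try on hypothetical plain-object nodes (again, <code>makeNode</code> and the <code>order</code> switch below are illustrative, not DOM APIs):</p>

```javascript
// Hypothetical stand-in for a DOM node.
const makeNode = (...childNodes) => ({ childNodes });

// shift() dequeues from the front (BFS); pop() takes from the back (DFS).
function includesIterative(rootNode, node, order = "bfs") {
  const list = [rootNode];
  while (list.length) {
    const currentNode = order === "bfs" ? list.shift() : list.pop();
    if (currentNode === node) return true;
    if (currentNode.childNodes.length) list.push(...currentNode.childNodes);
  }
  return false;
}

const leaf = makeNode();
const rootNode = makeNode(makeNode(leaf), makeNode());

console.log(includesIterative(rootNode, leaf, "bfs")); // true
console.log(includesIterative(rootNode, leaf, "dfs")); // true
console.log(includesIterative(leaf, rootNode));        // false
```

<p>One detail worth knowing: with <code>pop</code>, siblings are visited right to left, which doesn't matter for a membership check like this.</p>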
<p>Till now, both approaches followed the classic <strong>DS/Algo</strong> traversals: DFS and BFS.
We are now going to see two more approaches that take advantage of properties specific to <strong>DOM nodes</strong>.</p>
<hr />
<h3 id="lcrs-left-child-right-sibling-formhttpsenwikipediaorgwikileft-childright-siblingbinarytree">LCRS (Left Child Right Sibling) <a target="_blank" href="https://en.wikipedia.org/wiki/Left-child_right-sibling_binary_tree">form</a></h3>
<pre><code class="lang-js"><span class="hljs-number">1</span> Node.prototype.includes = <span class="hljs-function"><span class="hljs-keyword">function</span> (<span class="hljs-params">node</span>) </span>{
<span class="hljs-number">2</span> <span class="hljs-keyword">const</span> currentNode = <span class="hljs-built_in">this</span>;
<span class="hljs-number">3</span> <span class="hljs-keyword">if</span> (!currentNode)
<span class="hljs-number">4</span>   <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>;
<span class="hljs-number">5</span> <span class="hljs-keyword">if</span> (currentNode === node) <span class="hljs-keyword">return</span> <span class="hljs-literal">true</span>;
<span class="hljs-number">6</span> <span class="hljs-keyword">return</span> !!(currentNode.firstChild?.includes(node) || currentNode.nextSibling?.includes(node))
<span class="hljs-number">7</span> };
</code></pre>
<p> <strong>Explanation :-</strong></p>
<ul>
<li>Line <strong>2 to 5</strong> - <ul>
<li>Initialize <code>currentNode</code> to <code>this</code> and if <code>currentNode</code> doesn't exist, return <code>false</code>.</li>
<li>If <code>currentNode</code> is equal to <code>node</code>, return <code>true</code>.</li>
</ul>
</li>
<li>Line <strong>6</strong> - <strong>DOM nodes</strong> not only have pointers to their <strong>childNodes</strong> but also to their <strong>sibling nodes</strong> as well as their <strong>parent nodes</strong>. Here we are going to leverage the <strong>sibling factor</strong> for easy traversal. So we check whether the <strong>current node's</strong> <code>firstChild</code> includes the <code>node</code> <strong>OR</strong> the <strong>current node's</strong> <code>nextSibling</code> includes the <code>node</code>. Also notice the <code>!!</code>. Because of the optional chaining (<code>?.</code>) operator, we can end up with an <code>undefined || undefined</code> condition or a <code>false || undefined</code> condition, both of which evaluate to <code>undefined</code>, a <strong>falsy</strong> value, and so <code>!!</code> ensures <code>undefined</code> coerces to <code>false</code>. </li>
</ul>
<p><img src="https://media.tenor.com/images/5e88e1dee88940e7d3733039f2e67c1f/tenor.gif" alt="sibling" /></p>
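<p>Here is the same idea on hypothetical plain objects carrying only <code>firstChild</code> and <code>nextSibling</code> pointers (illustrative stand-ins for the DOM properties). Passing <code>null</code> straight into the recursion replaces the optional chaining, so no <code>!!</code> is needed in this sketch:</p>

```javascript
// Hypothetical stand-in for a DOM node in LCRS form.
function makeNode() {
  return { firstChild: null, nextSibling: null };
}

function includesLCRS(currentNode, node) {
  if (!currentNode) return false;
  if (currentNode === node) return true;
  // firstChild covers the subtree below; nextSibling covers the rest of this level.
  return includesLCRS(currentNode.firstChild, node) || includesLCRS(currentNode.nextSibling, node);
}

// parent -> child1, child1's next sibling is child2
const parent = makeNode();
const child1 = makeNode();
const child2 = makeNode();
parent.firstChild = child1;
child1.nextSibling = child2;

console.log(includesLCRS(parent, child2)); // true
console.log(includesLCRS(child1, child2)); // also true: the walk leaks into siblings
```

<p>Note the second result: starting the walk at a node also visits that node's own siblings, so if you only want descendants you would begin the traversal at <code>firstChild</code> after the self-check.</p>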
<hr />
<h3 id="using-parentnode">Using parentNode</h3>
<pre><code class="lang-js"><span class="hljs-number">1</span> Node.prototype.includes = <span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params">node</span>)</span>{
<span class="hljs-number">2</span> <span class="hljs-keyword">const</span> currentNode = <span class="hljs-built_in">this</span>;
<span class="hljs-number">3</span>  <span class="hljs-keyword">while</span>(node){
<span class="hljs-number">4</span>    <span class="hljs-keyword">if</span>(currentNode===node) <span class="hljs-keyword">return</span> <span class="hljs-literal">true</span>;
<span class="hljs-number">5</span>    node = node.parentNode;
<span class="hljs-number">6</span>  }
<span class="hljs-number">7</span>  <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>;
<span class="hljs-number">8</span> }
</code></pre>
<p><strong>Explanation :-</strong></p>
<ul>
<li>Line <strong>2 to 7</strong> - Remember the <strong>DOM node</strong> being so attached to its siblings and parent? The latter works well for this use-case too. While <code>node</code> exists, we check if <code>currentNode</code> is equal to <code>node</code>, and if it is, we return <code>true</code>; else <code>node</code> is made to point to its <code>parentNode</code> for further comparisons. If we exit the <code>while</code> loop, it's safe to say that the <code>node</code> isn't contained within the <code>currentNode</code> and thus we return <code>false</code>.</li>
</ul>
<p><img src="https://media2.giphy.com/media/l41YaVemSbdw1yTTi/giphy.gif" alt="parent" /></p>
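<p>A DOM-free sketch of the same upward walk, using hypothetical objects that only carry a <code>parentNode</code> pointer:</p>

```javascript
// Hypothetical stand-in for a DOM node: only a parentNode pointer.
const makeNode = (parentNode = null) => ({ parentNode });

function includesViaParent(currentNode, node) {
  // Walk up from `node`; meeting `currentNode` means it is an
  // ancestor of `node` (or the node itself).
  while (node) {
    if (currentNode === node) return true;
    node = node.parentNode;
  }
  return false;
}

const rootNode = makeNode();
const mid = makeNode(rootNode);
const leaf = makeNode(mid);

console.log(includesViaParent(rootNode, leaf)); // true
console.log(includesViaParent(mid, rootNode));  // false
```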
<p>And here is a working codepen with all 4 implementations. Comment out the rest for any one to reflect ✨.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://codepen.io/lapstjup/pen/ZEKQZRN?editors=0011">https://codepen.io/lapstjup/pen/ZEKQZRN?editors=0011</a></div>
<p>Have more ways to implement the same? Feel free to share your approach in the comment section 👇.</p>
<h3 id="thank-you-for-your-time-d">Thank you for your time :D</h3>
]]></content:encoded></item><item><title><![CDATA[Determine if a BST is valid or not]]></title><description><![CDATA[This article is the first one in the Random DS/Algo series. The purpose of this series is to just act as random collection of DS/Algo problems I solved so that in future I might revisit what I explained to people on the Internet 🤷‍♂️.

This is one t...]]></description><link>https://blog.lakbychance.com/determine-if-a-bst-is-valid-or-not</link><guid isPermaLink="true">https://blog.lakbychance.com/determine-if-a-bst-is-valid-or-not</guid><dc:creator><![CDATA[Lakshya Thakur]]></dc:creator><pubDate>Sat, 03 Jul 2021 14:16:21 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1625320867031/fubEbAQlk.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This article is the first one in the <strong>Random DS/Algo</strong> series. The purpose of this series is to just act as random collection of DS/Algo problems I solved so that in future I might revisit what I explained to people on the Internet 🤷‍♂️.</p>
<p><img src="https://64.media.tumblr.com/51015ec638a516f7f7d353ca198a5091/tumblr_pdbo9wBAe11xd0gvgo1_1280.gifv" alt="time travel shit" /></p>
<p>This is one of those questions that I always practice before an interview.</p>
<p><img src="https://media3.giphy.com/media/47yoNBN000eFEkRDGR/200.gif" alt="i like it" /></p>
<p>The <a target="_blank" href="https://leetcode.com/problems/validate-binary-search-tree/">leetcode</a> problem statement goes like this :-</p>
<blockquote>
<p>Given the root of a binary tree, determine if it is a valid binary search tree (BST). A valid BST is defined as follows:</p>
<ul>
<li><p>The left subtree of a node contains only nodes with keys less than the node's key.</p>
</li>
<li><p>The right subtree of a node contains only nodes with keys greater than the node's key.</p>
</li>
<li><p>Both the left and right subtrees must also be binary search trees.</p>
</li>
</ul>
</blockquote>
<p>There are 3 implementations that I know which can help us validate a BST.</p>
<p><img src="https://i.pinimg.com/originals/fc/ca/fa/fccafa6ce178ac8c1499abff6483a131.gif" alt="1" /></p>
<h3 id="heading-inorder-traversal-with-extra-space">Inorder traversal with extra space</h3>
<p>One of the neat features of a BST is that if you do an <strong>inorder traversal</strong> of it, you get the <strong>node values</strong> in sorted order.</p>
<pre><code class="lang-js">
<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">isValidBST</span>(<span class="hljs-params">root</span>)</span>{
 <span class="hljs-keyword">const</span> arr = [];
 helper(root,arr);
 <span class="hljs-keyword">for</span>(<span class="hljs-keyword">let</span> index = <span class="hljs-number">0</span>;index&lt;arr.length<span class="hljs-number">-1</span>;index++){
        <span class="hljs-keyword">if</span>(arr[index+<span class="hljs-number">1</span>]&lt;=arr[index]){
            <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>;
        }
  }
    <span class="hljs-keyword">return</span> <span class="hljs-literal">true</span>;
}

<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">helper</span>(<span class="hljs-params">root,arr</span>)</span>{
    <span class="hljs-keyword">if</span>(!root)
        <span class="hljs-keyword">return</span>;
    helper(root.left,arr);
    arr.push(root.val);
    helper(root.right,arr);
}
</code></pre>
<p><strong>Approach breakdown :-</strong></p>
<ol>
<li><p>Initialize an empty array <code>arr</code>.</p>
</li>
<li><p>Call <code>helper(root,arr)</code> which internally does :-</p>
<ol>
<li><p>Traverse the BST in <strong>inorder</strong> fashion.</p>
</li>
<li><p>Push each <code>root.val</code> inside the <code>arr</code>.</p>
</li>
</ol>
</li>
<li><p>Then we loop over the <code>arr</code>, and if at any <strong>index</strong> an element is <strong>less than or equal to</strong> the previous element, we simply return <code>false</code>. This is because the elements should be <strong>strictly increasing</strong> as per the requirements.</p>
</li>
<li><p>Otherwise, we return <code>true</code>.</p>
</li>
</ol>
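<p>A quick sanity check of the approach (the <code>node</code> factory below is a hypothetical stand-in for Leetcode's <code>TreeNode</code>):</p>

```javascript
// Hypothetical node factory standing in for Leetcode's TreeNode.
const node = (val, left = null, right = null) => ({ val, left, right });

function isValidBST(root) {
  const arr = [];
  helper(root, arr);
  // A valid BST's inorder sequence must be strictly increasing.
  for (let index = 0; index < arr.length - 1; index++) {
    if (arr[index + 1] <= arr[index]) return false;
  }
  return true;
}

function helper(root, arr) {
  if (!root) return;
  helper(root.left, arr);
  arr.push(root.val);
  helper(root.right, arr);
}

// Valid:   2        Invalid:   5
//         / \                 / \
//        1   3               1   4
//                               / \
//                              3   6   (inorder: 1,5,3,4,6 - not increasing)
console.log(isValidBST(node(2, node(1), node(3)))); // true
console.log(isValidBST(node(5, node(1), node(4, node(3), node(6))))); // false
```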
<hr />
<p><img src="https://i.pinimg.com/originals/a3/d3/f5/a3d3f54c2726b58c269bc2ca382693ed.gif" alt="2" /></p>
<h3 id="heading-inorder-traversal-without-extra-space">Inorder traversal without extra space</h3>
<p>It's possible to do the above, and exit early if the BST is <strong>invalid</strong>, without using the extra <code>arr</code> space.</p>
<pre><code class="lang-js">
<span class="hljs-keyword">var</span> isValidBST = <span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params">root</span>)</span>{
    <span class="hljs-keyword">const</span> prev = helper(root,<span class="hljs-literal">null</span>);
    <span class="hljs-keyword">return</span> prev?.isNotValid ? <span class="hljs-literal">false</span> : <span class="hljs-literal">true</span>;
    }

<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">helper</span>(<span class="hljs-params">root,prev</span>)</span>{
    <span class="hljs-keyword">if</span>(!root)
        <span class="hljs-keyword">return</span> prev;
    prev = helper(root.left,prev);
    <span class="hljs-keyword">if</span>(prev &amp;&amp; root.val &lt;= prev.val){
        prev.isNotValid = <span class="hljs-literal">true</span>;
    }
    <span class="hljs-keyword">if</span>(prev?.isNotValid)
       <span class="hljs-keyword">return</span> prev;
    prev = root;
    prev = helper(root.right,prev);
    <span class="hljs-keyword">return</span> prev;
}
</code></pre>
<p><strong>Approach breakdown :-</strong></p>
<ol>
<li><p>Let's consider <code>helper(root,prev)</code> first (<code>prev</code> represents <strong>previous node</strong>) :-</p>
<ol>
<li><p><code>if(!root) return prev</code> - If the <code>root</code> is <code>undefined</code> , we return the <code>prev</code> element.</p>
</li>
<li><p><code>prev = helper(root.left,prev)</code> - We will first go through the <strong>left subtree</strong> for each <code>root</code> to find the <code>prev</code> element.</p>
</li>
<li><p><code>if(prev &amp;&amp; root.val &lt;= prev.val){ prev.isNotValid = true; }</code> - Once we return from the <strong>left subtree</strong> , if <code>prev</code> exists, we compare <code>root.val</code> and <code>prev.val</code> to check if current <code>root.val</code> is <strong>less than or equal to</strong> <code>prev.val</code>. If it is, we create a property on <code>prev</code> by the name of <code>isNotValid</code> and set it to <code>true</code>.</p>
</li>
<li><p><code>if(prev?.isNotValid) return prev;</code> - Next we check whether <code>prev.isNotValid</code> exists, and if it does, we simply return <code>prev</code> to exit early rather than proceeding to the subsequent <strong>right subtree</strong>.</p>
</li>
<li><p><code>prev = root</code> - This is how we set the <code>prev</code> value to <code>root</code> so that for next node we can use this <code>prev</code> value for necessary comparisons.</p>
</li>
<li><p><code>prev = helper(root.right,prev);</code> - Going through the <strong>right subtree</strong> for each <code>root</code> to find the <code>prev</code> element.</p>
</li>
<li><p><code>return prev;</code> - It's essential to return the <code>prev</code> to the calling function for value to reflect.</p>
</li>
</ol>
</li>
<li><p><code>const prev = helper(root,null);</code> - Inside <code>isValidBST</code>, we get the <code>prev</code> element from <code>helper(root,null)</code>.</p>
</li>
<li><p><code>return prev.isNotValid ? false : true;</code> - If <code>prev.isNotValid</code> exists then that means the BST is invalid and we return <code>false</code> else we return <code>true</code>.</p>
</li>
</ol>
<hr />
<p><img src="https://i.pinimg.com/originals/5d/7c/c3/5d7cc314a06862b21765decac8654b35.gif" alt="3" /></p>
<h3 id="heading-utilizing-the-bst-property">Utilizing the BST property</h3>
<p>For a BST to be valid, each <strong>node</strong> value must lie within an open range determined by its ancestors: greater than every ancestor it sits in the <strong>right subtree</strong> of, and less than every ancestor it sits in the <strong>left subtree</strong> of. This is what we are going to use now :-</p>
<pre><code class="lang-js">
<span class="hljs-keyword">var</span> isValidBST = <span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params">root</span>)</span>{
       <span class="hljs-keyword">return</span> helper(root,-<span class="hljs-literal">Infinity</span>,<span class="hljs-literal">Infinity</span>);
   }
<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">helper</span>(<span class="hljs-params">root,leftMax,rightMax</span>)</span>{
    <span class="hljs-keyword">if</span>(!root)
        <span class="hljs-keyword">return</span> <span class="hljs-literal">true</span>;
    <span class="hljs-keyword">if</span>(root.val &gt; leftMax &amp;&amp; root.val &lt; rightMax) {
        <span class="hljs-keyword">return</span> helper(root.left,leftMax,root.val) &amp;&amp; helper(root.right,root.val,rightMax);
    }
    <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>;
}
</code></pre>
<p><strong>Approach breakdown :-</strong></p>
<ol>
<li><p>Let's consider <code>helper(root,leftMax,rightMax)</code> first :-</p>
<ol>
<li><p><code>if(!root) return true;</code> - If the <code>root</code> is <code>undefined</code> we can say that the BST is valid till now.</p>
</li>
<li><p><code>if(root.val &gt; leftMax &amp;&amp; root.val &lt; rightMax) { return helper(root.left,leftMax,root.val) &amp;&amp; helper(root.right,root.val,rightMax); }</code> - This is the core logic where we compare <code>root.val</code> with <code>leftMax</code> and <code>rightMax</code>. Only if <code>root.val</code> is <strong>greater than</strong> <code>leftMax</code> and <strong>less than</strong> <code>rightMax</code> do we proceed to check the corresponding <strong>left subtree</strong> and <strong>right subtree</strong>, and both subtrees must return <code>true</code> for the BST to be valid. For the <strong>left subtree</strong>, <code>rightMax</code> changes to the current <code>root.val</code>, and for the <strong>right subtree</strong>, <code>leftMax</code> changes to the current <code>root.val</code>.</p>
</li>
<li><p>If the above condition fails, then we know it's not further required to check for any subsequent left or right subtree and simply return <code>false</code>.</p>
</li>
</ol>
</li>
<li><p>Inside <code>isValidBST</code>, we do <code>return helper(root,-Infinity,Infinity);</code> and pass <code>leftMax</code> as <code>-Infinity</code> and <code>rightMax</code> as <code>Infinity</code> as initial values for our <code>root</code> node.</p>
</li>
</ol>
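<p>Tracing the ranges makes the early exit visible (same hypothetical <code>node</code> factory standing in for Leetcode's <code>TreeNode</code>):</p>

```javascript
// Hypothetical node factory standing in for Leetcode's TreeNode.
const node = (val, left = null, right = null) => ({ val, left, right });

var isValidBST = function (root) {
  return helper(root, -Infinity, Infinity);
};

function helper(root, leftMax, rightMax) {
  if (!root) return true;
  if (root.val > leftMax && root.val < rightMax) {
    // The left subtree inherits root.val as its upper bound,
    // the right subtree inherits it as its lower bound.
    return helper(root.left, leftMax, root.val) && helper(root.right, root.val, rightMax);
  }
  return false;
}

// Node 4 sits in 5's right subtree, so it is checked against the range
// (5, Infinity) and rejected, without ever visiting 3 or 6.
console.log(isValidBST(node(5, node(1), node(4, node(3), node(6))))); // false
console.log(isValidBST(node(5, node(1), node(7, node(6), node(8))))); // true
```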
<hr />
<p>Out of all the approaches, the last one is really clean, and I guess an interviewer might expect it. I have given interviews where the first approach was enough and the interviewer didn't ask for any optimizations. But if they do, I might skip the second one and jump straight to the third one.</p>
<p>Also, I have ignored the space taken by the call stack due to recursion, and well, you never know, I might update this article in the future with more approaches if I feel like it.</p>
<p><img src="https://i.pinimg.com/originals/cf/ea/23/cfea23861b2353e9c725a5476731b886.gif" alt="shrug" /></p>
<h2 id="heading-thank-you-for-your-time-d">Thank you for your time :D</h2>
]]></content:encoded></item></channel></rss>