PaperLedge

PaperLedge is a podcast where cutting-edge research meets AI-powered storytelling. It's hosted by Ernis, whose blend of gentle reassurance, cosmic wonder, explanatory clarity, and enthusiastic charm makes complex research accessible to everyone. Each episode, Ernis transforms the latest academic papers into engaging, jargon-free audio experiences that deliver key insights in digestible formats. Whether you're a researcher seeking interdisciplinary perspectives, a student supplementing your studies, or simply curious about scientific breakthroughs, PaperLedge has something for you.
Episodes



Thursday Mar 20, 2025
Hey PaperLedge learning crew, Ernis here, ready to dive into some brain-tickling research! Today, we’re tackling a fascinating study about how well Large Language Models, or LLMs – think of them as super-smart text-generating machines like the ones powering chatbots – actually reason when faced with increasingly complex problems. It's like testing if a star quarterback can still make good decisions under immense pressure!
These LLMs are getting incredibly good at spitting out text that sounds human, and recent improvements have made them seem even better at reasoning. But the big question is: how well does their reasoning hold up as problems get really hard?
To find out, the researchers used a clever approach. They used a puzzle called "Tents." Imagine a grid where you need to place tents next to trees, following specific rules. The neat thing about Tents is that you can make the puzzle as big and complex as you want, and there's a known, efficient way to solve it – a sort of linear-time solution. Think of it like a recipe: you know exactly how many steps it'll take to bake a cake, no matter how big the cake is.
So, the researchers fed increasingly larger and more complex Tents puzzles to these LLMs and watched how hard they "worked" to solve them. They measured this "reasoning effort" – basically, how much computational power the LLM used and how long it took to arrive at an answer.
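To make that measurement idea concrete, here's a minimal Python sketch of what such an experimental loop could look like. The helpers (make_tents_puzzle, query_llm) are hypothetical placeholders, not the authors' code; the point is simply to log a proxy for effort, such as the number of reasoning tokens and the wall-clock time, as the grid grows.

```python
# Minimal sketch of the evaluation loop; make_tents_puzzle and query_llm
# are hypothetical stand-ins, not the paper's actual tooling.
import time

def measure_reasoning_effort(make_tents_puzzle, query_llm, grid_sizes):
    results = []
    for n in grid_sizes:
        puzzle, solution = make_tents_puzzle(n)       # an n x n Tents instance
        start = time.perf_counter()
        answer, reasoning_tokens = query_llm(puzzle)  # tokens used while "thinking"
        results.append({
            "grid_size": n,
            "correct": answer == solution,
            "reasoning_tokens": reasoning_tokens,     # one proxy for effort
            "seconds": time.perf_counter() - start,   # another proxy
        })
    return results
```

Plotting reasoning_tokens against grid_size is where the plateau the paper describes would show up: effort rising with complexity, then flattening or even dropping.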
Here's where it gets interesting. The researchers found that as the puzzles got harder, the LLMs' reasoning effort did increase... but only up to a point! After a certain level of complexity, the LLMs' effort stopped increasing, and in some cases, even decreased! It's like the quarterback freezing up under pressure!
"This observation highlights a critical limitation in the logical coherence of current LLMs as problem complexity increases..."
This is a big deal. It suggests that current LLMs have a limit to how logically coherent they can be when faced with super-complex problems. They might seem smart, but their reasoning power doesn't scale indefinitely. This means we need to find ways to improve their reasoning abilities so they can handle even the most challenging tasks.
Why does this matter to you?
For the AI enthusiasts: This research points to a critical bottleneck in current LLM architecture. We need new innovations to overcome these limitations.
For the everyday user: This tells us that even the smartest chatbots aren't perfect. Don't blindly trust everything they say, especially when dealing with complex or critical information.
For anyone interested in the future of work: As we increasingly rely on AI for decision-making, understanding these limitations is crucial. We need to be aware of when AI can be trusted and when human oversight is essential.
The study also revealed that different LLMs performed significantly differently on these complex puzzles. Some models were much better at handling the increasing complexity than others.
So, what are some questions that come to mind after hearing this research?
Could the way we train these LLMs be contributing to this "reasoning ceiling"? What if we trained them specifically to handle more complex logical problems?
Are there specific types of logical problems that LLMs struggle with more than others? Can we identify these weaknesses and develop targeted solutions?
How can we design more effective ways to measure the "reasoning effort" of LLMs? Are there other metrics we should be considering beyond computational power and time?
That's the gist of it, learning crew! A fascinating look at the limitations of even the most advanced AI and a call to action to push the boundaries of logical reasoning in machines. Until next time, keep those gears turning!
Credit to Paper authors: Benjamin Estermann, Roger Wattenhofer



Thursday Mar 20, 2025
Hey Learning Crew, Ernis here, ready to dive into another fascinating piece of research from the PaperLedge archives! Today, we're tackling a paper that's all about making big language models, like the ones powering your favorite chatbots, a whole lot smarter and more efficient.
Now, these language models are massive – think of them as a giant brain with billions of connections. Traditionally, if you wanted to teach them something new, you'd have to tweak everything, which is like rebuilding an entire city just to change one street sign. This paper explores a smarter way: model editing.
What is Model Editing? It's like pinpointing exactly which parts of that giant brain are responsible for a specific task and only adjusting those parts. Imagine your car's engine: if the problem is the fuel injector, you don't replace the whole engine, right? You just fix the injector. Model editing does the same for language models.
This particular research focuses on using model editing to improve aspect-based sentiment classification. Sounds complicated, but it's actually something we do every day. Think about reading a restaurant review. You don't just want to know if the reviewer liked it overall; you want to know what they thought about the food, the service, and the atmosphere. That's aspect-based sentiment analysis – figuring out the sentiment (positive, negative, or neutral) towards specific aspects (food, service, atmosphere) of a product or service.
The researchers used a clever technique called causal intervention to figure out which "neurons," or connections, inside the language model were most important for understanding the sentiment of different aspects. They essentially "turned off" different parts of the model to see what would happen. It's like pulling different wires in a machine to see which one causes a specific function to stop working.
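For the technically curious, here's a rough PyTorch sketch of that kind of intervention, assuming a HuggingFace-style transformer. The layer and unit indices are invented for illustration, and this is not the authors' actual code.

```python
# Toy causal-intervention sketch: zero out selected hidden units in one
# transformer layer and see how the sentiment prediction changes.
import torch

def ablate_units(layer, unit_indices):
    """Return a hook handle that zeroes the chosen units in `layer`'s output."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden[..., unit_indices] = 0.0               # "turn off" these neurons
        return (hidden, *output[1:]) if isinstance(output, tuple) else hidden
    return layer.register_forward_hook(hook)

# Usage sketch (indices are hypothetical):
# handle = ablate_units(model.model.layers[6], unit_indices=[12, 87, 301])
# ablated_logits = model(**inputs).logits
# handle.remove()                                     # restore normal behavior
```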
"Our findings reveal that a distinct set of mid-layer representations is essential for detecting the sentiment polarity of given aspect words."
The big discovery? It turns out that a specific group of neurons in the middle layers of the model are crucial for detecting the sentiment of those aspect words. By focusing their editing efforts on only these critical neurons, the researchers were able to teach the model to be better at aspect-based sentiment classification, but using far fewer resources than typical fine-tuning.
Think of it like this: instead of training the entire model on a new dataset, they're just giving a targeted "booster shot" to the specific neurons that need it. This makes the process significantly faster and more efficient.
So, why does this matter? Well, for a few reasons:
For developers, this means building smarter, more efficient AI systems with less computational power. They can adapt large language models for specialized tasks without breaking the bank.
For businesses, this could lead to better customer service chatbots, more accurate product reviews, and a deeper understanding of customer opinions.
For everyone, this research pushes the boundaries of what's possible with AI, making these powerful tools more accessible and adaptable to a wider range of applications.
The researchers demonstrated that their model editing approach achieved results that were just as good, or even better, than existing methods, but with a fraction of the trainable parameters. This is a huge step forward in making AI more sustainable and accessible.
Here are a couple of things that popped into my head while reading this:
If we can pinpoint the neurons responsible for specific tasks, could we eventually "transplant" those skills from one model to another?
What are the ethical implications of precisely controlling and modifying the behavior of AI models? Could this be used to manipulate or bias these systems?
That's all for today's deep dive! Hopefully, this has shed some light on the exciting world of model editing and its potential to revolutionize the way we interact with AI. Until next time, keep learning, keep questioning, and keep exploring!
Credit to Paper authors: Shichen Li, Zhongqing Wang, Zheyu Zhao, Yue Zhang, Peifeng Li



Thursday Mar 20, 2025
Computation and Language - How much do LLMs learn from negative examples?
Alright learning crew, Ernis here, ready to dive into some fascinating research about how we teach AI to be, well, less wrong! We're talking about Large Language Models – think of them as super-smart parrots that can string together sentences in amazing ways, like ChatGPT or Bard.
These models learn in stages, kind of like going to school. First, they're just exposed to tons of text – that's the unsupervised pre-training. It's like letting them wander around a library and soak everything up.
Then comes supervised fine-tuning, where they get direct instruction: "Here's a question, here's the right answer." But what about learning from mistakes?
That's where this paper comes in. It looks at the final phase of training, where these models are shown negative examples - incorrect answers, rejected responses, the AI equivalent of a big, red "X". Think of it like teaching a dog to sit. You don't just reward the "sit," you also correct the "stand" or "lie down" at the wrong time.
The researchers used a clever technique called "Likra" to carefully control how much influence these negative examples had. Imagine Likra as a volume knob for "wrongness." They wanted to see what happens when you turn it up or down.
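The episode doesn't spell out Likra's exact math, so take this as a deliberately simplified, hypothetical loss rather than Likra's actual formulation; it just shows how a single coefficient can act as that "volume knob" for negative examples.

```python
# Hypothetical illustration only: a weighted loss where `alpha` controls how
# strongly the model is pushed away from rejected (incorrect) answers.
import torch.nn.functional as F

def weighted_pos_neg_loss(pos_logits, pos_targets, neg_logits, neg_targets, alpha):
    # Usual next-token loss on the correct answer tokens.
    pos_loss = F.cross_entropy(pos_logits.view(-1, pos_logits.size(-1)),
                               pos_targets.view(-1))
    # Average log-probability the model assigns to the wrong answer tokens;
    # adding it to the loss means the optimizer drives it down.
    neg_logprob = -F.cross_entropy(neg_logits.view(-1, neg_logits.size(-1)),
                                   neg_targets.view(-1))
    return pos_loss + alpha * neg_logprob   # alpha = 0 ignores negatives entirely
```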
They focused on multiple-choice questions, which provide a clear way to define "right" and "wrong." What they found was really interesting:
Negative examples can be super-effective. At a certain point in training, showing the AI what not to do led to a much bigger jump in performance than just showing it more correct answers. It's like suddenly the AI "gets it" in a way it didn't before.
Not all wrong answers are created equal. The most helpful negative examples were the ones that were plausible but incorrect – the "near misses." These are the tricky ones, the answers that sound good but are subtly wrong. Correcting these really helps the AI sharpen its understanding. Think of it like learning to play chess: it's not enough to know the basic moves, you need to learn how to avoid common traps and blunders.
Negative examples help squash those hallucinations. Showing the model wrong answers helps it learn to more accurately identify those tricky, plausible-sounding but ultimately incorrect responses. The researchers found that while positive examples alone didn't do much to reduce the likelihood of these "hallucinations" (when the AI confidently makes stuff up), negative examples were much more effective.
So, why does this matter? Well, for a few reasons:
For developers: This research offers a powerful new tool to make our AI models more accurate and reliable.
For users: This could lead to AI assistants that are less likely to give you wrong information, making them more trustworthy.
For society: In areas like medicine or law, where accuracy is critical, this kind of improvement could be a game-changer.
This research suggests that showing AI what not to do is just as important as showing it what to do. It's about teaching these models to not just memorize, but to truly understand.
Here are a couple of things that popped into my head while prepping this:
If negative examples are so powerful, how do we ensure they're not biased or misleading? What guardrails do we need to put in place?
Could this approach of using "near miss" negative examples be applied to other machine learning tasks, beyond language models? Think self-driving cars - can we teach them to avoid accidents by showing them examples of near-collisions?
Alright learning crew, that's the tea on negative examples in LLMs. Let me know what you think!
Credit to Paper authors: Shadi Hamdan, Deniz Yuret



Thursday Mar 20, 2025
Hey PaperLedge crew, Ernis here, ready to dive into another fascinating piece of research! Today, we're tackling a paper that challenges a core assumption about how language models, like the ones powering your favorite chatbots and translation apps, actually work. Think of it like this: we've always believed the fancy engine is what makes a race car win, but what if someone told you the tires were just as, or even more, important?
This paper focuses on something called the attention mechanism within Transformer models. Transformers are the powerhouse behind most modern language AI. The attention mechanism is usually described as the secret sauce. It helps the model understand the context of words in a sentence by figuring out which words are most related to each other. Imagine you're reading a sentence about a "bank." Is it a river bank or a financial institution? The attention mechanism is supposed to help the AI figure that out based on the surrounding words.
The researchers behind this paper, however, decided to question just how crucial this "attention" is. Their argument is that perhaps it's not as important as we all thought.
Now, here's where it gets interesting. They came up with a clever method called PAPA (it stands for something technical, but let's just call it "Plain Average Processing of Attention"). Essentially, PAPA replaces the normal attention mechanism, which changes based on the input, with a fixed, average attention pattern. It's like replacing a sophisticated GPS that calculates the best route in real-time with a pre-programmed map that always takes the same roads.
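Here's a tiny, self-contained sketch of the contrast, under my own naming; it isn't the paper's implementation, but it shows what "replacing input-dependent attention with a constant average pattern" means mechanically.

```python
# Conceptual sketch: normal attention vs. a fixed, pre-averaged pattern.
import torch

def dynamic_attention(q, k, v):
    # Standard attention: the weights are recomputed from the current input.
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    return torch.softmax(scores, dim=-1) @ v

def constant_attention(v, avg_attn):
    # "PAPA"-style replacement: ignore q and k and reuse one constant
    # [seq_len, seq_len] matrix of attention weights, e.g. collected and
    # averaged over a corpus from the pre-trained model beforehand.
    return avg_attn @ v
```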
So, they took these powerful, pre-trained Transformer models and essentially lobotomized part of their brains – replacing the dynamic, input-dependent attention with this static, average attention. Then, they put these models to work on six different tasks to see how they’d perform.
And guess what? The models still performed surprisingly well! They only saw an average performance drop of about 8%. That's like saying your race car only lost 8% of its speed when you swapped out the fancy engine part with something way simpler!
"We find that without any input-dependent attention, all models achieve competitive performance."
But here's the real kicker: the better the original model was, the more it suffered from the PAPA treatment. The researchers suggest this means the stronger models are the ones that actually lean on their input-dependent attention, and that there may still be room to exploit the mechanism more fully.
What does this all mean? Well, the researchers argue that we might be overemphasizing the importance of input-dependent attention. Maybe there are simpler, more efficient ways to achieve similar results. Or perhaps we need to figure out how to better utilize the attention mechanism in the Transformer architecture to get its full benefit.
Here's a quick summary of what we learned:
The paper challenges the idea that the attention mechanism is the be-all and end-all of Transformer models.
They replaced input-dependent attention with a static average and the models still performed well.
Better models suffered more from this replacement, suggesting attention utilization might be key.
So, why should you care about this research? Well, if you're an AI researcher, it suggests new avenues to explore for building more efficient and effective language models. If you're a business using AI, it hints that you might be able to achieve similar results with less computationally expensive models, saving you money and energy. And if you're just a curious mind, it's a reminder that even well-established ideas in science are always open to questioning and refinement.
Now, this research raises some interesting questions. What if we could identify exactly which situations require the full power of input-dependent attention and which don't? Could we then dynamically switch between different attention mechanisms to optimize performance and efficiency? And, perhaps more fundamentally, does this research suggest that our current understanding of how Transformer models "understand" language is incomplete?
That's all for this episode. Keep learning, keep questioning, and I'll catch you on the next PaperLedge!
Credit to Paper authors: Michael Hassid, Hao Peng, Daniel Rotem, Jungo Kasai, Ivan Montero, Noah A. Smith, Roy Schwartz



Thursday Mar 20, 2025
Computer Vision - Improving LLM Video Understanding with 16 Frames Per Second
Alright learning crew, Ernis here, ready to dive into another fascinating paper! Today, we're talking video understanding, and it's all about how computers "see" videos – and how they can see them better.
So, you know how our eyes don't see the world as a series of snapshots? It's a continuous, flowing experience, right? Well, traditionally, when we teach computers to "watch" videos, they're basically given a slideshow – maybe just one or two pictures per second. That's like trying to understand a basketball game by only seeing a couple of blurry photos! You’re gonna miss all the action!
That low frame rate leads to critical visual information loss.
That's where this paper comes in. These researchers realized that current video understanding models are missing a ton of information because they're only looking at a few frames per second (FPS). They've created something called F-16, and it's all about cranking up the frame rate.
Think of it like this: imagine you're trying to learn how to bake a cake. If you only see a picture of the ingredients and a picture of the finished cake, you're missing all the important steps in between! But if you watch a video showing every step – mixing, stirring, baking – you get a much clearer understanding. That's what F-16 does for video understanding.
F-16 ups the frame rate to a whopping 16 frames per second! That's like watching a much smoother, more detailed version of the video. Now, you might be thinking, "Won't that be a massive amount of data?" And you'd be right! That's why they also developed a clever way to compress the visual information within each second, so the model can handle all that extra detail without getting overwhelmed.
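Here's a back-of-the-envelope sketch of the sampling side of that idea in Python; the decoding and the per-second compression step are placeholders, not the F-16 implementation.

```python
# Pick 16 evenly spaced frames from every second of video instead of 1;
# each one-second group would then go through some compression/aggregation
# step (placeholder here) before reaching the language model.
def sample_frame_indices(duration_s, native_fps, target_fps=16):
    indices = []
    for sec in range(int(duration_s)):
        for j in range(target_fps):
            t = sec + j / target_fps               # timestamp within this second
            indices.append(round(t * native_fps))  # nearest native frame index
    return indices

# Example: a 3-second clip shot at 30 fps yields 48 sampled frames,
# grouped 16 per second, versus just 3 frames at the traditional 1 fps.
```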
The results? Amazing! They found that by using this higher frame rate, F-16 significantly improved video understanding across the board. It performed better on general video understanding tasks and on more specific, detailed tasks. We're talking about things like accurately analyzing what's happening in a fast-paced sports game like basketball or gymnastics. Apparently, it even outperformed some of the big-name models like GPT-4o and Gemini 1.5 Pro!
But here's the really cool part. They also came up with a new decoding method that allows F-16 to run efficiently even at lower frame rates, without having to retrain the entire model. It's like having a super-powered engine that can still purr along nicely when you don't need all that horsepower.
So, why does this matter? Well, for anyone working on AI-powered video analysis, this is a game-changer. Imagine using this technology for:
Self-driving cars: Seeing and reacting to rapidly changing traffic situations with more precision.
Medical imaging: Analyzing videos of surgical procedures with greater accuracy to improve outcomes.
Sports analytics: Providing deeper insights into athletic performance and strategy.
Security and surveillance: Detecting suspicious activities in real-time with greater reliability.
This research shows us that sometimes, the simplest ideas – like paying closer attention to the details – can have a huge impact. It's not always about building bigger and more complex models; sometimes, it's about making the most of the information we already have.
And best of all? They’re planning on releasing the code, model, and data, meaning the whole learning crew will be able to play around with it.
Here are a few things I’m wondering about:
How does F-16’s performance change when dealing with different types of video quality or lighting conditions?
What are the potential ethical considerations of using high-frame-rate video analysis in surveillance or other sensitive applications?
Exciting stuff, right? I can't wait to see what you all think! Let me know your thoughts in the comments!
Credit to Paper authors: Yixuan Li, Changli Tang, Jimin Zhuang, Yudong Yang, Guangzhi Sun, Wei Li, Zejun Ma, Chao Zhang



Wednesday Mar 19, 2025
Alright learning crew, Ernis here, ready to dive into some mind-bending research! Today, we're tackling a paper that challenges how Large Language Models, or LLMs, learn to understand and answer our questions.
So, picture this: LLMs, like the ones powering your favorite chatbots, usually read and process text from left to right, just like we do. Think of it as reading a sentence word by word, building understanding as you go. The paper calls this "left-to-right autoregressive factorization", but we can just call it the "normal" way of reading.
But what if...what if there's a better way? What if reading backwards could unlock hidden potential? That's exactly what these researchers explored!
They investigated training LLMs to read from right to left (R2L). They used multiple-choice questions (MCQs) as their testing ground. Think of it like this: MCQs are a great way to see if a model truly understands something, or if it's just good at predicting the next word based on what it's already seen.
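As a rough sketch of one way to get such a model (the tokenizer and model helpers below are hypothetical, and this may differ from the authors' exact setup): reverse every training sequence and keep the ordinary next-token objective, which amounts to a right-to-left factorization.

```python
# Right-to-left training by sequence reversal; tokenizer/model are placeholders.
def to_r2l(token_ids):
    # Predicting the "next" token of a reversed sequence is the same as
    # predicting the *previous* token in normal reading order.
    return list(reversed(token_ids))

def r2l_generate(model, tokenizer, prompt_text):
    ids = tokenizer.encode(prompt_text)               # hypothetical tokenizer API
    continuation = model.generate_from(to_r2l(ids))   # hypothetical model API
    return tokenizer.decode(list(reversed(continuation)))  # flip back to read it
```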
Now, the results are pretty fascinating. Across different sizes of models (from 2 billion to 8 billion parameters – these are big brains!), the researchers found that R2L models actually outperformed the regular L2R models on several tricky MCQ benchmarks. We're talking about questions that test:
Logical reasoning: Can the model deduce the correct answer based on the information given?
Commonsense understanding: Does the model understand basic facts about the world?
Truthfulness assessment: Can the model tell what's true from what's false?
"Our work demonstrates that exploring alternative factorizations of the text distribution can lead to improvements in LLM capabilities..."
Why is this happening? Well, the researchers dug deep. They believe the performance boost is linked to a few key factors:
Calibration: R2L models might be better at knowing when they don't know something. Think of it like being more honest about your confidence level.
Computability: Maybe some problems are just easier to solve when approached from the opposite direction. Imagine trying to untangle a knot – sometimes, starting from the end makes all the difference.
Directional conditional entropy: Okay, this one's a mouthful! But basically, it means that the amount of new information you get from a word can change depending on which direction you're reading.
To understand these factors better, they even created controlled experiments using arithmetic tasks! This allowed them to isolate and tweak each factor to see how it impacted performance.
So, why does all this matter? Well, for starters, it challenges our assumptions about how LLMs should learn. It suggests that there's no one-size-fits-all approach, and that different tasks might benefit from different learning strategies. For those working on improving AI, this opens up exciting new avenues to explore.
But even if you're not a researcher, this has implications. Think about how LLMs are being used in everything from customer service to education. If we can make them better at understanding and reasoning, we can unlock even more potential. Imagine a chatbot that's not just helpful, but also insightful and truly understands your needs.
Here are a few questions that popped into my mind:
Could we combine L2R and R2L approaches for even better results? Maybe a model that reads in both directions simultaneously?
Are there specific types of questions or tasks where R2L learning is particularly advantageous?
Does this research suggest something about how humans process information? Do we sometimes "read backwards" in our own minds to solve problems?
That's all for today, learning crew! Keep those questions coming, and I'll catch you on the next episode of PaperLedge!
Credit to Paper authors: Yizhe Zhang, Richard Bai, Zijin Gu, Ruixiang Zhang, Jiatao Gu, Emmanuel Abbe, Samy Bengio, Navdeep Jaitly



Wednesday Mar 19, 2025
Alright learning crew, Ernis here, ready to dive into some fascinating research that could revolutionize how we discover new drugs! Today, we're talking about a paper that's tackled the challenge of designing molecules from the ground up, atom by atom. Think of it like building with LEGOs, but instead of plastic bricks, we're using the very building blocks of matter to create potential medicines.
The core idea revolves around something called Generative Flow Networks, or GFlowNets for short. Now, that sounds intimidating, but stick with me! Imagine you're trying to find the best hiking trail. You could wander aimlessly, or you could use a map that highlights trails with amazing views (the “rewards”). GFlowNets are like that map, guiding us to create molecules that have desired properties, like being effective against a disease or being easily absorbed by the body.
Previous attempts at this have used pre-made chunks of molecules, like using pre-built walls instead of individual LEGO bricks. This limits what you can create. This paper introduces Atomic GFlowNets, or A-GFNs. The A stands for atomic and signifies that instead of starting with pre-built molecular fragments, they start with individual atoms!
So, how do they know where to start? That's where the clever bit comes in: unsupervised pre-training. They basically show the A-GFN a huge collection of existing drug-like molecules and teach it what makes a good drug. It's like showing a budding chef thousands of recipes before they start experimenting. The A-GFN learns to predict things like how “drug-like” a molecule is, how well it can interact with cells, and how easy it is to actually make in a lab. These are called molecular descriptors.
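To give a feel for what "molecular descriptors" look like in practice, here's a small sketch using the RDKit library. The descriptors themselves (QED drug-likeness, logP, molecular weight) are standard RDKit calls, but the reward weighting is invented for illustration and isn't the paper's.

```python
# Toy descriptor-based reward; the RDKit descriptors are real, the weighting is made up.
from rdkit import Chem
from rdkit.Chem import QED, Crippen, Descriptors

def descriptor_reward(smiles):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:                      # chemically invalid string -> no reward
        return 0.0
    drug_likeness = QED.qed(mol)         # 0..1 "how drug-like" score
    logp = Crippen.MolLogP(mol)          # rough proxy for membrane permeability
    weight = Descriptors.MolWt(mol)
    # Toy penalty: discourage overly greasy or overly heavy molecules.
    penalty = max(0.0, logp - 5.0) + max(0.0, (weight - 500.0) / 500.0)
    return max(0.0, drug_likeness - 0.1 * penalty)
```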
To make it even better, they then use goal-conditioned finetuning. Imagine telling our chef, "Okay, now create a dish that's specifically low in sodium and high in protein." The A-GFN can then fine-tune its molecule-building skills to target specific properties we're looking for in a drug. Think of it like teaching the AI to optimize for specific outcomes.
The researchers trained their A-GFN on a big dataset of molecules and then tested it against other methods. They showed that their approach was really good at generating novel, drug-like molecules with the desired properties.
"This research opens up exciting possibilities for discovering new drugs by exploring a much wider range of chemical structures than previously possible."
Why does this matter?
For researchers: This provides a powerful new tool for drug discovery, potentially speeding up the process and leading to more effective treatments.
For the average listener: This could mean new and better medicines being developed faster, impacting everything from cancer treatment to pain management.
This research is a big step forward in using AI to design molecules from scratch. By teaching the AI the fundamental rules of chemistry and then letting it explore the possibilities, we can potentially unlock a whole new world of medicines.
Here are a few questions that popped into my head:
Could this technology be used to design molecules for other applications besides medicine, like new materials or more efficient batteries?
How do we ensure that the AI is designing molecules that are safe and don't have unintended side effects?
What are the ethical considerations of using AI in drug discovery, and how do we ensure that these technologies are used responsibly?
That's all for today, learning crew! I hope you found that as fascinating as I did. Until next time, keep exploring!
Credit to Paper authors: Mohit Pandey, Gopeshh Subbaraj, Artem Cherkasov, Emmanuel Bengio



Tuesday Mar 18, 2025
Artificial Intelligence - Multi-Agent Collaboration Mechanisms: A Survey of LLMs
Hey PaperLedge learning crew, Ernis here, ready to dive into another fascinating piece of research! Today, we're cracking open a paper that's all about how AI is learning to play well with others. Think of it as less "lone wolf" AI and more "Avengers" – a team of AI agents working together to tackle some seriously complex problems.
The paper focuses on something called LLM-based Multi-Agent Systems (MASs). Now, that's a mouthful, but let's break it down. LLM stands for Large Language Model – basically, the brains behind AI like ChatGPT. So, we're talking about AI powered by these powerful language models. And "Multi-Agent System" just means a group of these AIs working together.
Imagine you're trying to plan a surprise birthday party. One AI could be in charge of finding the perfect venue, another could handle the guest list and invitations, and a third could coordinate the catering. Each AI has its own specialty, and they all communicate and collaborate to achieve a common goal – a successful surprise party!
This paper gives us a framework for understanding how these AI teams collaborate. They break it down into a few key areas:
Who's involved (Actors): Which AI agents are part of the team?
How they interact (Types): Are they cooperating, competing, or maybe a mix of both – what they call "coopetition"? Think of rival companies collaborating on a standard for a new technology.
How they're organized (Structures): Is there a leader AI calling the shots, or is it a more democratic, peer-to-peer setup?
Their game plan (Strategies): Are they following pre-defined roles, or are they adapting their approach based on the situation?
The rules of engagement (Coordination Protocols): How do they communicate and make decisions together?
The researchers looked at a bunch of existing AI systems and used this framework to understand how they work. It's like having a cheat sheet for understanding the dynamics of AI teams!
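Just to make the cheat sheet tangible, here's a tiny, invented data-structure sketch whose fields mirror those dimensions; nothing here comes from the paper beyond the names of the categories.

```python
# Illustrative only: the survey's dimensions as a small data structure.
from dataclasses import dataclass
from typing import List

@dataclass
class CollaborationSetup:
    actors: List[str]           # which agents are on the team
    interaction_type: str       # "cooperation", "competition", or "coopetition"
    structure: str              # e.g. centralized leader vs. peer-to-peer
    strategy: str               # pre-defined roles vs. adaptive behavior
    coordination_protocol: str  # how messages and decisions are exchanged

birthday_party = CollaborationSetup(
    actors=["venue_agent", "invitation_agent", "catering_agent"],
    interaction_type="cooperation",
    structure="centralized",    # one planner agent delegates subtasks
    strategy="role-based",
    coordination_protocol="natural-language messages on a shared scratchpad",
)
```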
So why should you care about this? Well, these Multi-Agent Systems are popping up everywhere! The paper highlights examples like:
Next-gen Wireless Networks (5G/6G): Imagine AI agents optimizing network traffic in real-time to give you the fastest possible download speeds.
Industry 5.0: Think smart factories where AI agents coordinate robots and humans to create personalized products efficiently.
Question Answering: Instead of just one AI trying to answer a complex question, a team of AIs could break it down and pool their knowledge for a more comprehensive answer.
Social and Cultural Settings: Even things like AI agents collaborating to preserve and promote cultural heritage!
The possibilities are endless!
The big takeaway is that moving from single, isolated AI models to these collaborative Multi-Agent Systems is a huge step towards creating truly intelligent and effective solutions for real-world problems.
"This research is a foundation for demystifying and advancing LLM-based MASs toward more intelligent and collaborative solutions."
But it's not all smooth sailing. The paper also points out some challenges and areas for future research. For example, how do we ensure that these AI teams are fair and unbiased? How do we prevent them from being manipulated? And how do we build trust between humans and these increasingly complex AI systems?
These are crucial questions as we move towards a future where AI is increasingly integrated into our lives.
So, what are your thoughts, learning crew? Here are a couple of things that popped into my head:
If we have AI agents specializing in different areas, how do we prevent them from becoming too siloed and losing sight of the bigger picture?
Could these collaborative AI systems eventually develop their own form of "collective intelligence" that surpasses human capabilities?
Let me know what you think in the comments! Until next time, keep learning and keep questioning!
Credit to Paper authors: Khanh-Tung Tran, Dung Dao, Minh-Duong Nguyen, Quoc-Viet Pham, Barry O'Sullivan, Hoang D. Nguyen







