LangChain team's latest release: LangSmith, an LLM application development platform that helps put LLMs into real applications
Original source: deep thinking SenseAI
Tool support for large language models (LLMs) is still in its infancy. Because of the dynamic, non-deterministic nature of LLMs, traditional software tools often cannot fully meet these models' needs.
That's where LangChain and LangSmith come in.
In this post, we'll explore the latest offering from the team behind LangChain (the most popular LLM software tool) and look at the new problems LangSmith hopes to solve in the LLM stack.
**01. What is LangSmith?**
When LangChain was originally created, the goal was to lower the barrier to entry for building LLM applications. While there has been some debate about LangChain's viability as a tool, it has largely achieved this goal. With prototyping solved, the next problem is helping these applications move into production and ensuring they run in a reliable, maintainable way. A simple way to think about it:
Langchain = Prototype
LangSmith = Application
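As a concrete touchpoint for that prototype-to-production step: LangChain applications opt into LangSmith tracing through environment variables. A minimal sketch, where the API key and project name are placeholders:

```python
import os

# Enable LangSmith tracing for a LangChain app (values below are placeholders).
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"
os.environ["LANGCHAIN_PROJECT"] = "my-llm-app"  # runs are grouped under this project
```

With these set, subsequent LangChain runs are logged to the named LangSmith project without further code changes.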
**But which challenges matter little during prototyping yet must be addressed in production?**
Reliability - It's easy to build functionality that works for simple, constrained examples, but building a consistent LLM application that satisfies most companies' requirements is actually quite difficult.
To address this, LangSmith provides new functionality around the following 5 core pillars:
One of the great values of LangSmith is the ability to perform all of these operations through a simple and intuitive user interface, which greatly lowers the barrier to entry for developers without a software background.
Many aspects of LLM behavior are not intuitive when viewed as raw numbers, so a visual interface is very useful. The author found that a well-designed user interface can actually speed up prototyping and day-to-day work, since doing everything in code alone is often tedious.
Also, being able to visualize the complex chains of calls in an LLM system is very useful for understanding why you get a certain output. As users build more complex workflows, it can be hard to see exactly how a query passes through the different steps, so being able to inspect these runs and their history through a simple interface is a major value-add.
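To make the "run tree" idea concrete, here is a toy tracer, purely illustrative and not LangSmith's actual API, that records nested calls in a chain the way a tracing UI would visualize them:

```python
import functools

# Toy run-tree tracer (illustrative only; NOT LangSmith's actual API).
# Each traced call becomes a node that records its children, mimicking
# the nested run tree a tracing interface would display.

_stack = []  # current call path
runs = []    # completed top-level runs collect here


def traced(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        node = {"name": fn.__name__, "children": []}
        if _stack:
            _stack[-1]["children"].append(node)  # nest under the caller
        else:
            runs.append(node)                    # new top-level run
        _stack.append(node)
        try:
            node["output"] = fn(*args, **kwargs)
            return node["output"]
        finally:
            _stack.pop()
    return wrapper


@traced
def retrieve(query):
    return ["doc1", "doc2"]


@traced
def generate(query, docs):
    return f"answer to {query!r} using {len(docs)} docs"


@traced
def rag_chain(query):
    docs = retrieve(query)
    return generate(query, docs)


result = rag_chain("what is LangSmith?")
```

After running, `runs` holds one tree rooted at `rag_chain` with `retrieve` and `generate` as children, which is exactly the kind of structure that is hard to reconstruct from logs but easy to read in a visual trace viewer.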
**02. Who is competing with LangSmith?**
While not yet a direct competitor, it would make sense for an organization like Vercel (with its AI SDK) to roll out similar capabilities to become the go-to platform for AI builders. In the next 3-6 months, due to the huge market potential of these tools, other platforms are expected to launch similar tools.
**Currently, Vercel is more focused on LLM deployment and serving, as this fits better with its historical core product, but in the long run it makes more sense to expand the AI SDK.**
While LangSmith doesn't appear to be deeply involved with embeddings yet, there seem to be natural intersections in this area, setting it apart from the many embedding providers that offer built-in UIs. Ecosystems such as LlamaIndex would benefit from this type of product development, but the question is whether they can stay differentiated in a similar problem space.
Still, it's nice to see that LangSmith wants to connect with as many tools as possible. In the launch blog post, the team mentions an integration with OpenAI, as well as multiple fine-tuning providers, which will let developers export data and train directly. Integrations like these will not only earn developer goodwill, but over time will also serve as a lightweight moat (connecting various tools is not always easy).
**03. How will LangSmith grow?**
The author mainly hopes it becomes extensible, because if LangSmith can be incorporated into other apps and services, its reach could grow exponentially. For example, letting developers log in with their LangChain accounts and monitor their LLMs on Vercel, combined with the AI SDK and deployment information, would be very valuable.
**What is needed to maintain differentiation over the long term?**
The author is very excited about LangSmith and believes it solves a series of real problems that developers and product builders encounter when trying to go to production. The real long-term question remains: is there enough here to build a durable, competitive business?
But the author does not have a clear answer yet. The general sense is that many of LangSmith's current features are table stakes for developers, and most LLM providers will want to build similar functionality into their own platforms in the future. That doesn't mean LangSmith can't succeed: look at HashiCorp's Terraform, the glue connecting all the cloud providers, which solved a problem big enough to become a public company. However, LangSmith needs to keep expanding its reach in order to compete with multiple providers and an ecosystem of other tools.