Hello Zendesk Community,

We are writing to request a crucial feature for better management of our CSAT metrics and internal reporting: the ability to customize the expiration date of the custom CSAT survey link.

We appreciate that the expiration period was recently extended to 28 days. However, having this long, fixed period without the option to set our own timeframe significantly disrupts our internal controls and reporting cycles.

🎯 The Request: We urgently request that Zendesk allow customers to edit and configure the number of days the custom CSAT link remains active after the ticket is solved/closed.

📉 Why the 28-Day Period is Problematic: A 28-day validity period drastically alters our feedback collection and reporting alignment:

- Reporting distortion: Our reporting cycles (e.g., weekly, monthly) are much shorter. Responses arriving up to 28 days later skew the metrics for the period when the service interaction actually occurred, making it harder to link performance directly to agent action.
- Relevance: Feedback is most valuable immediately after service. Extended periods dilute the relevance and accuracy of the customer's memory of the interaction.
- Operational alignment: We need the flexibility to align the CSAT eligibility window with our internal definition of a closed cycle, which is often much shorter than four weeks.

In summary, we need to choose the expiration period (e.g., 7 days, 14 days) to ensure our CSAT data is timely, accurate, and aligned with our internal business metrics.

Thank you for considering this critical functionality.
Hi Team,

We have a couple of suggestions for the generative search feature in the Knowledge product that we hope will improve the experience for admins.

{{generative_answers}} placement validation

When using a custom Zendesk theme, enabling Quick Answers requires manually inserting {{generative_answers}} into search_results.hbs. While the system correctly flags some incompatible placements with an error, it does not consistently catch every placement, allowing the template to be saved and published with no warning while the feature silently stops working. This primarily affects admins responsible for theme configuration. The system should consistently validate all placements of {{generative_answers}} in search_results.hbs and prevent saving or publishing when an incompatible location is detected.

Generative search usage

Usage tracking for Generative Search is currently available in Admin Center under Account > Usage > Summary. However, the data is aggregated at the account level and lacks the granularity needed for multi-brand instances. This affects admins and stakeholders who need per-brand visibility to make informed decisions about feature adoption, capacity, and performance. Without brand-level usage breakdowns, it is not possible to understand how different brands within a multi-brand instance are consuming the feature, making it harder to attribute usage, optimise content, and plan for scaling. The ideal solution would be a detailed usage table within the Generative Search section showing monthly consumption broken down by brand, covering at least the trailing 12 months, enabling admins to compare brand performance and make data-driven decisions over time.

Looking forward to seeing these improvements in a future update. Thank you.
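To illustrate the kind of placement validation we have in mind, here is a minimal sketch. To be clear, this is not Zendesk's actual validator, and the two rules it checks (the helper appears exactly once, and never inside an {{#each}} block) are assumptions chosen for illustration only:

```python
import re

def validate_generative_answers(template: str) -> list[str]:
    """Return a list of problems with {{generative_answers}} placement.

    Illustrative rules only (assumptions, not Zendesk's real rules):
    the helper must appear exactly once and must not be nested inside
    an {{#each}} block.
    """
    problems = []
    occurrences = [m.start() for m in
                   re.finditer(r"\{\{\s*generative_answers\s*\}\}", template)]
    if len(occurrences) == 0:
        problems.append("{{generative_answers}} is missing from the template")
    elif len(occurrences) > 1:
        problems.append("{{generative_answers}} appears more than once")

    for pos in occurrences:
        # Count {{#each}} opens vs. closes before this position to
        # detect whether the helper sits inside an open block.
        before = template[:pos]
        opens = len(re.findall(r"\{\{#each\b", before))
        closes = len(re.findall(r"\{\{/each\}\}", before))
        if opens > closes:
            problems.append(
                "{{generative_answers}} is nested inside an {{#each}} block")
    return problems
```

The point is that a check like this runs in milliseconds on save, so there seems to be no technical reason the theme editor could not block publishing when an incompatible placement is detected.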
Hi Team,

Following the recent release of the Spelling and Grammar Checker, we'd like to request a shared/global dictionary feature.

Currently, the dictionary appears to be user-specific, which creates challenges for our team working in a specialized industry with distinct terminology. This affects all agents and new team members across our organization. Without a shared dictionary, industry-specific terms are flagged as errors for every new user, creating friction and inconsistency. This prevents the team from benefiting from established vocabulary without manual, individual setup.

We currently have no effective workaround: each user must individually add specialized terms to their own dictionary, resulting in duplicated effort across the team.

We would like a designated member to be able to maintain a shared or global dictionary that is automatically applied across all users. Ideally, this member (an admin, or another member with the right role permissions) could add, edit, or remove terms centrally, ensuring the entire team works from the same validated vocabulary from day one.
It is great to see that a Slack workspace can now be connected to multiple Zendesk accounts (Announcing multi-instance support for the Slack integration – Zendesk help). This is a great addition. However, my company uses Microsoft Teams, and we have 5 separate instances. I am sure there are Microsoft users out there like me who would love the same functionality. Any chance you can enable the same for us?
Hello,

Following our evaluation of the Version Management feature (Environment Configurations) after its General Availability announcement, we would like to share some product feedback based on our sandbox testing. We hope these observations are useful as you continue to develop and refine the feature.

1. Snapshot language limitation & Production-to-Production restore restriction

We encountered two notable limitations around Snapshots:

- Language dependency: The language of a Snapshot is tied to the account language at the time it is taken. When attempting to deploy from a Snapshot created in one language (for example, Portuguese) to an environment set to a different language (for example, English), the deployment fails. There is no warning or indication of this at the point of taking the Snapshot or initiating the deployment, which makes troubleshooting unnecessarily difficult. We suggest either making Snapshots language-agnostic or surfacing a clear warning when a language mismatch is detected before deployment begins.
- Production-to-Production restore restriction: For plans below Enterprise (which do not include a Sandbox), the ability to restore a Snapshot back to Production is directly tied to the existence of a Sandbox environment. This feels like a significant limitation, as it restricts access to a core recovery capability based on plan tier. We would welcome a review of this dependency, or at minimum clearer documentation around it.

2. Lack of detailed error information when deployments fail

When a deployment test fails, the feature currently notifies the user that the test was unsuccessful but does not provide specific details on why individual items failed. Similarly, Partially Deployed statuses indicate what was not pushed, but not the reason behind it.

For a feature that manages configuration changes across environments, actionable error information is essential. Without it, admins are left guessing, which increases the risk of repeated failures or incorrect workarounds. We strongly encourage more granular and descriptive error outputs, ideally identifying the specific configuration item, the reason for failure, and a suggested resolution where possible.

3. No indication of who triggered a deployment

Currently, there is no visible record within the Version Management feature of which admin initiated a specific deployment. While the Audit Log may capture some changes, this is not surfaced directly within the Deployments view itself.

For accountability and governance purposes, particularly in environments with multiple admins, knowing who triggered a deployment is important. We suggest adding an "Initiated by" field to each Deployment record, visible directly within the Version Management interface.

We believe Version Management has strong potential and are encouraged by the direction of this feature. We hope this feedback helps prioritize improvements that would make it more robust and trustworthy for production use.
Zendesk offers the option to display Device information for users that open a chat. This information can be useful in a number of ways, including troubleshooting issues on specific devices.

Unfortunately, there is currently no way to export this data; it can only be viewed manually by reviewing each ticket. Even exporting tickets in bulk to Excel or JSON does not include this information, even though fields like IP and country are included.

Being able to see the percentage of devices your product is accessed from can help locate issues, as the code for mobile and desktop versions is usually different. We recently had an update to one of our scripts that made the chat unavailable to mobile users, and we would have liked to quantify the impact by filtering on Device information in Explore. Manually reviewing a ticket showed us that specific devices could not open the chat, but having this information in bulk would be far more helpful.

I believe a Device filter should be available in the Chat or Support datasets, as this information is already present on the ticket, so it should not be too much to ask to have it available as bulk data.
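For context, the analysis we want from Explore is simple once the data exists in bulk. A rough sketch of the manual workaround, assuming per-ticket device strings are collected one ticket at a time (for example from the Ticket Audits API, whose audit records can carry a metadata["system"]["client"] user-agent field; whether that field is populated for every chat ticket is an assumption here):

```python
from collections import Counter

def extract_devices(audits: list[dict]) -> list[str]:
    """Pull device/user-agent strings from a list of audit records.

    Assumes the JSON shape of GET /api/v2/tickets/{id}/audits, where an
    audit may include metadata["system"]["client"]; audits without that
    field are skipped.
    """
    devices = []
    for audit in audits:
        client = audit.get("metadata", {}).get("system", {}).get("client")
        if client:
            devices.append(client)
    return devices

def device_share(devices: list[str]) -> dict[str, float]:
    """Percentage share of each device string, rounded to one decimal."""
    counts = Counter(devices)
    total = len(devices)
    return {d: round(100 * n / total, 1) for d, n in counts.items()}
```

This is exactly the per-ticket crawling that a native Device attribute in the Chat or Support datasets would make unnecessary.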
Hi Team,

We recently encountered a limitation that has impacted our operations: it is currently not possible to share Prebuilt Live Dashboards via external links. As noted in your documentation, "you cannot share the prebuilt live dashboards as an external link to users outside of your Zendesk account." The dashboards we are referring to include "Live Data (including Chat)" and "Live Data (including Messaging)", but are not limited to them.

This functionality is critical to how we monitor our day-to-day operations. Without it, our teams have been forced to rely on workarounds that are unreliable, time-consuming, and fall short of our operational needs.

We suggest enabling external link sharing for these dashboards. Doing so would allow our teams and key stakeholders to maintain real-time visibility into current operations and make timely, informed decisions around staffing and strategy.

We hope to see this in a future update. Thank you for your time and consideration.
Hello,

We attempted to implement an email reminder workflow with native Zendesk features, but during testing we confirmed that the platform's current limitations make it impossible to create reliable reminders at intervals shorter than 60 minutes (for example 5, 10, 15, or 20 minutes).

At the moment, this is not possible natively because:

- Automations run only once per hour, so they cannot trigger reminders at precise times like 5, 10, 15, or 20 minutes.
- SLAs only work when the customer is the last person who replied. In our workflows, the last reply is usually from an agent, meaning SLAs do not activate and cannot measure elapsed time after agent updates.
- Triggers cannot run on time-based delays.

Because of these limitations, there is no native Zendesk feature that can send timely reminders based on time since the last agent update or time spent in a status (e.g., Pending).

To support operational workflows like this, we would like to propose the following features:

- Time-based trigger conditions: allow triggers to include "Minutes since status changed", "Minutes since last agent update", and "Minutes since last public or internal comment". Triggers would then be able to fire reminders precisely and instantly.
- Faster automation execution: allow automations to run at configurable intervals (5, 10, or 15 minutes), not only once per hour.

These enhancements would make it possible to manage time-sensitive workflows entirely within Zendesk, without external tools, reducing operational risk and improving internal responsiveness.

Thank you.
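The condition we are asking for is trivial to express. A minimal sketch of the requested "minutes since last agent update" trigger condition (pure logic only; today this check has to live in an external scheduler precisely because automations run only hourly):

```python
from datetime import datetime, timedelta

def reminder_due(last_agent_update: datetime, now: datetime,
                 interval_minutes: int = 15) -> bool:
    """Would a trigger condition like
    "minutes since last agent update >= interval_minutes" fire?

    This mirrors the proposed time-based trigger condition; the
    15-minute default is just an example value from this request.
    """
    return now - last_agent_update >= timedelta(minutes=interval_minutes)
```

An external cron job evaluating this every few minutes is the workaround we want to retire.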
It would be great if the bot could narrow its article search to a specific section of the Guide in order to optimise the search results. The same would apply to the Guide's search box. Let me give an example: when you build a flow for a bot / AI agent and you build a form, it is reasonable to use a ticket field to ask the user for details, for instance to find out which kind of product they are looking for and eventually to route the ticket. Often this "ask for details" step has a clear relationship with a Guide section, for instance if you have a section for each kind of product. And often we have similar articles for different products (price, warranty, etc.), which could mislead the AI agent into providing information about the wrong product. This is why I think it would be wonderful if the details provided by the end user could be used to focus the search on a specific section of the Guide.
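For what it's worth, the public Help Center Search API appears to already accept a section filter on article search (please check the current API docs for exact parameter support); the ask here is for the bot / AI agent and the Guide search box to apply the same scoping from the details the user has provided. A tiny sketch of what such a scoped query could look like (the subdomain and section ID below are made-up examples):

```python
from urllib.parse import urlencode

def section_scoped_search_url(subdomain: str, query: str,
                              section_id: int) -> str:
    """Build a Help Center article search restricted to one Guide section.

    Uses the documented articles search endpoint; whether the `section`
    filter fits every use case here is for Zendesk to confirm.
    """
    params = urlencode({"query": query, "section": section_id})
    return (f"https://{subdomain}.zendesk.com"
            f"/api/v2/help_center/articles/search.json?{params}")
```

If the ticket field value were mapped to a section ID like this inside the bot flow, the wrong-product answers described above would largely disappear.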