Building Myself an English-to-English Translator in Slack
English is English, right?
I’m not sure if it’s just me, but this feels increasingly less true as AI threatens to take over the way that we think and communicate. I’ve been working in product for over a decade, and I should be used to the role of connecting the business world with the technical one… except now it feels harder than ever to find a common language.
I’ve written about subjectivity in reality before, and that’s something that is becoming more obvious and apparent to me on a day-to-day basis. As someone who is neurodivergent, I’m actually relatively used to the gap between how I see the world and how others do, but lately AI seems to have expanded it to an entirely new scale.
One of the biggest challenges in this space is the huge gap in context across individual experiences, and how it amplifies workplace dysfunction and miscommunication. Companies lose hundreds of thousands of dollars per year on unresolved communication and coordination issues, workers waste 19 hours per week on knowledge-sharing inefficiencies, and disengaged employees cost employers more than $154 billion annually. It’s not only bad for business; it’s bad for individuals. A 2023 study by Asana found that 88% of product managers feel stressed often or all of the time, and the increasing pressure for PMs to become ‘super ICs’ doesn’t solve the loneliness challenges that young American adults are facing. During a time of significant economic uncertainty and upheaval, organizational pressures are mounting.
So what is one to do?
Write an English-to-English translator.
import os
import requests
import logging
from slack_bolt import App
from slack_bolt.adapter.flask import SlackRequestHandler
from flask import Flask, request, make_response

logging.basicConfig(level=logging.DEBUG)


def load_prompt_template():
    try:
        with open("prompt_template.txt", "r") as file:
            return file.read().strip()
    except OSError as e:
        logging.error("Error loading prompt template: %s", e)
        # Fallback default prompt if the file isn't available.
        return ("Read the provided context and make suggestions on how to "
                "reframe it for a professional environment. Context:")


prompt_template = load_prompt_template()

# Read credentials from the environment rather than hardcoding them.
app = App(
    token=os.environ.get("SLACK_BOT_TOKEN"),
    signing_secret=os.environ.get("SLACK_SIGNING_SECRET"),
)

flask_app = Flask(__name__)
handler = SlackRequestHandler(app)


def get_ollama_response(user_input):
    url = "http://localhost:11434/api/generate"
    full_prompt = f"{prompt_template} {user_input}"
    payload = {
        "model": "qwen2.5",
        "prompt": full_prompt,
        "stream": False,  # Ensure a complete, non-streamed response.
    }
    try:
        logging.debug("Sending request to %s with payload %s", url, payload)
        response = requests.post(url, json=payload, timeout=120)
        logging.debug("Response status code: %s", response.status_code)
        logging.debug("Response text: %s", response.text)
        response.raise_for_status()
        json_data = response.json()
        return json_data.get("response", "No response from Ollama.")
    except requests.RequestException as e:
        logging.error("Error communicating with Ollama: %s", e)
        return f"Error communicating with Ollama: {e}"


@app.event("app_mention")
def handle_app_mention(body, client, logger):
    event = body.get("event", {})
    user = event.get("user")
    channel = event.get("channel")
    text = event.get("text", "")
    response_text = get_ollama_response(text)
    try:
        # Reply privately so only the person who asked sees the suggestion.
        client.chat_postEphemeral(
            channel=channel,
            user=user,
            text=response_text,
        )
    except Exception as e:
        logger.error("Error sending ephemeral message: %s", e)


@flask_app.route("/slack/events", methods=["POST"])
def slack_events():
    data = request.get_json(silent=True)
    logging.debug("Received Slack event: %s", data)
    # Slack sends a one-time url_verification challenge when the endpoint
    # is first configured; echo it back as plain text.
    if data and data.get("type") == "url_verification":
        challenge = data.get("challenge")
        logging.debug("Responding to URL verification with challenge: %s", challenge)
        return make_response(challenge, 200, {"Content-Type": "text/plain"})
    return handler.handle(request)


if __name__ == "__main__":
    port = int(os.environ.get("PORT", 3000))
    flask_app.run(host="0.0.0.0", port=port)
What it does
The Python file above creates a basic Flask app that handles event callbacks from Slack and queries a local Ollama instance. The app is designed to work both in DMs and in group channels when it is @mentioned, which theoretically could help a team collaborate on framing a particular message. Right now, I’m routing it through ngrok so Slack can reach the local server.
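For reference, getting it running end-to-end is only a couple of terminal commands. This is a rough sketch of my local setup: the env var names and token values are illustrative (it assumes the app reads its credentials from the environment), and your ngrok tunnel URL will differ.

```shell
# Assumed setup: Ollama installed locally, a Slack app already created.
ollama pull qwen2.5                 # fetch the model the bot queries

export SLACK_BOT_TOKEN="xoxb-..."   # bot token from your Slack app config
export SLACK_SIGNING_SECRET="..."   # signing secret from the same page
python app.py                       # starts Flask on port 3000 by default

# In a second terminal: expose the local port so Slack can deliver events,
# then set the app's Event Subscriptions URL to <tunnel-url>/slack/events.
ngrok http 3000
```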
What it doesn’t do
Most of what I want it to, at this stage, but it’s hard to test a multi-user workflow alone in a single-person Slack workspace. Now that I’ve got the bones of the app in place, I’m going to play around with some additional capabilities – like sending a few recent messages from a channel as additional context for the LLM, or hooking it up to a RAG embedding system so that it has better context about my projects and can construct more accurate responses. I’m also (again) finding myself ready to re-invest some time in my local LLM fine-tuning project, because I want the responses to read more like something I would actually say.
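The recent-messages idea is probably the easiest of those to sketch. Slack’s Web API has a `conversations_history` method for pulling a channel’s latest messages; the wiring below is an untested assumption (the helper names are mine, not part of the app yet), but it shows the shape of what I have in mind:

```python
def build_context(messages, limit=5):
    """Join the most recent channel messages into a plain-text block
    that can be prepended to the LLM prompt. Slack returns messages
    newest-first, so we reverse them into reading order."""
    recent = messages[:limit]
    lines = [m.get("text", "") for m in reversed(recent)]
    return "\n".join(f"- {line}" for line in lines if line)


def get_channel_context(client, channel, limit=5):
    """Hypothetical helper: fetch recent messages for extra LLM context.
    conversations_history is a real Slack Web API method; calling it
    requires the channels:history (or groups:history) scope."""
    result = client.conversations_history(channel=channel, limit=limit)
    return build_context(result.get("messages", []), limit=limit)
```

In the `app_mention` handler, the returned block could simply be appended to the text passed into `get_ollama_response`, so the model sees the last few turns of conversation instead of just the mention itself.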
Can this actually help?
I don’t know yet. I just built it this morning. So far, it seems promising, though like any AI application that relies on LLMs, it will be a lot better once I:
- Spend time working on refining the prompt. I’ve recently been experimenting with asking ChatGPT to generate prompts for me, and it definitely (“Certainly!”) helped with reducing some of the “AI tells”, but there’s a long way to go to make it sound less mechanical.
- Test different models and prompt combinations. I’m using Transformer Lab to run different experiments, and right now, I default to using Qwen2.5 because it works quickly on my laptop and has reasonable enough answers for the time being, but I wouldn’t be surprised if a different model that was fine-tuned on more casual conversations had better results.
- Fine-tune whatever model is most promising on my own Q&A dataset, which I’m slowly working through.
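For the first of those, here’s the kind of thing my prompt_template.txt contains at the moment – a starting sketch, not a prompt I’ve settled on (the app appends the raw message after the trailing “Context:”):

```
You are helping rewrite a workplace message. Read the provided context and
suggest how to reframe it for a professional environment. Keep the sender's
intent and key details, use plain language, avoid jargon and filler phrases,
and return only the rewritten message. Context:
```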
That all said, I’ve already started using it to help me craft responses and messages to send to my team in Slack. It’s a faster workflow than swapping over to ChatGPT, though it’s not as feature-rich as Goblin.Tools, which I was introduced to after talking through some of the challenges that I’ve been facing with some co-workers.
All in all, I’m pleased with how quickly I was able to get this up and running, and it feels like a meaningful first step toward a useful, context-aware professional communication coach. More to come!