One of the tasks I waste time on every day is checking what the weather will be like and then staring into my wardrobe trying to work out what to wear. So I decided to create an app to do both of these jobs for me!
The backend
To interact with the AI, I set up a simple Django app. I won't go into the details too much here, as I did a step-by-step guide to setting up a Django + React app in the previous post. Instead, I'll show the parts that are most relevant to this task.
Getting the forecast
1. Getting the weather forecast: I used OpenWeatherMap's free API to retrieve 3-hourly forecasts.
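A minimal sketch of that call, assuming the requests library and an OPENWEATHERMAP_API_KEY environment variable (the get_forecast name is my own; it returns the forecast_data referenced below):

```python
import os
import requests

def get_forecast(city="London"):
    # OpenWeatherMap's free 5-day endpoint returns forecasts in 3-hour steps
    resp = requests.get(
        "https://api.openweathermap.org/data/2.5/forecast",
        params={
            "q": city,
            "units": "metric",
            "appid": os.getenv("OPENWEATHERMAP_API_KEY"),
        },
        timeout=10,
    )
    resp.raise_for_status()
    # Each entry in 'list' contains a timestamp, temperature and weather description
    return resp.json()["list"]
```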
2. Based on these forecasts, we want to generate a human-friendly string describing the weather and the appropriate clothing choices for it:
HOT_OUTFIT_CHOICES = ['tank top and shorts', 'summer dress', 't-shirt, skirt and sandals']
MILD_OUTFIT_CHOICES = ['jeans and t-shirt', 'dress and jacket', 'maxi skirt and light cardigan']
CHILLY_OUTFIT_CHOICES = ['jeans and sweater', 'leggings and sweater', 'sweater and skirt']
COLD_OUTFIT_CHOICES = ['coat, hat, and gloves', 'winter jacket and boots']
def get_suggestion_str(temp, weather=None):
    rain_str = ""
    if weather and "rain" in weather:
        rain_str = "It's raining! Don't forget your umbrella and raincoat."
    if temp > 20:
        temp_str = f"It's hot outside! Wear light clothing like: {', '.join(HOT_OUTFIT_CHOICES)}"
    elif 10 <= temp <= 20:
        temp_str = f"The weather is mild. Wear comfortable clothing like: {', '.join(MILD_OUTFIT_CHOICES)}"
    elif 5 <= temp < 10:
        temp_str = f"It's a bit chilly. Wear warm clothing like: {', '.join(CHILLY_OUTFIT_CHOICES)}"
    else:
        temp_str = f"It's cold outside! Wrap up warm with clothing such as: {', '.join(COLD_OUTFIT_CHOICES)}"
    return f'{temp_str} {rain_str}'
Note that the inputs, weather and temp, are both extracted from the forecast_data we saw earlier. You can change the outfit choices to suit your personal taste.
Generating outfit images
The next step is to generate prompts based on this weather data to feed to our image generation model, Stable Diffusion. This is just a quick example prompt, but you could personalise it even more by changing or removing the gender and adding more specific query terms.
def suggest_outfit_prompt(temp):
    if temp > 20:
        return f"woman in stylish outfit containing {' or '.join(HOT_OUTFIT_CHOICES)}"
    elif 10 <= temp <= 20:
        return f"woman in stylish outfit containing {' or '.join(MILD_OUTFIT_CHOICES)}"
    elif 5 <= temp < 10:
        return f"woman in stylish outfit containing {' or '.join(CHILLY_OUTFIT_CHOICES)}"
    else:
        return f"woman in stylish outfit containing {' or '.join(COLD_OUTFIT_CHOICES)}"
Now we want to load the model. I'm on a MacBook with MPS, but if you have CUDA, you could also use that.
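Here's a minimal sketch of what the load_gen_pipeline helper might look like, assuming the diffusers library; the specific distilled checkpoints (nota-ai/bk-sdm-small and madebyollin/taesd) are stand-ins I've chosen, so swap in whichever distilled model and autoencoder you prefer:

```python
import torch
from diffusers import AutoencoderTiny, StableDiffusionPipeline

def load_gen_pipeline():
    # Pick the best available device: MPS on Apple silicon, CUDA if present, else CPU
    if torch.backends.mps.is_available():
        device = "mps"
    elif torch.cuda.is_available():
        device = "cuda"
    else:
        device = "cpu"
    # Assumed checkpoints: a distilled Stable Diffusion model and a tiny autoencoder
    pipe = StableDiffusionPipeline.from_pretrained("nota-ai/bk-sdm-small")
    pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd")
    return pipe.to(device)
```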
Then we can use this pipeline to generate images from our prompts like this:
def generate_fashion_images(all_temps):
    if not GENERATED_IMAGE_DIR.exists():
        GENERATED_IMAGE_DIR.mkdir()
    avg_temp = sum(all_temps) / len(all_temps)
    prompt = suggest_outfit_prompt(avg_temp)
    print(prompt)
    pipe = load_gen_pipeline()
    img_paths = generate_images(prompt, pipe)
    print(f"Generated some suggestions for {prompt}")
    return img_paths
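The generate_images helper isn't shown above; here's a minimal sketch of what it might look like, reusing GENERATED_IMAGE_DIR from the surrounding code and assuming we want a handful of candidate images saved to disk:

```python
def generate_images(prompt, pipe, n_images=3):
    # Run the pipeline once per image so the user gets a few options to pick from
    img_paths = []
    for i in range(n_images):
        image = pipe(prompt, num_inference_steps=25).images[0]
        path = GENERATED_IMAGE_DIR / f"outfit_{i}.png"
        image.save(path)
        img_paths.append(str(path))
    return img_paths
```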
Here I’m using a distilled version of Stable Diffusion, and a distilled version of the autoencoder, to reduce the amount of compute needed and speed up the results.
All that's left is to create two Django views: one to get the forecasts, and one to generate images (this one will take a list of forecast temperatures as input). Then you can work on your frontend display and you're done!
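As a rough sketch, assuming Django REST Framework and the helpers from earlier (the view names and the get_forecast helper are my own naming, not fixed by the app):

```python
from rest_framework.decorators import api_view
from rest_framework.response import Response

@api_view(['GET'])
def forecast(request):
    # The hard-coded city is an assumption; you could read it from a query param
    return Response({"forecast": get_forecast("London")})

@api_view(['POST'])
def outfit_images(request):
    temps = request.data.get('temps', [])
    if not temps:
        return Response({'message': 'No temperatures provided'}, status=400)
    return Response({"images": generate_fashion_images(temps)})
```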
As you can see in the example shown above, the generated images aren't very realistic! This is due to the distilled model and distilled autoencoder we used for this simple example. With more compute, we could use the original Stable Diffusion model and autoencoder, which generate much more realistic images like the ones below.
Integrating AI capabilities into your app has never been easier. In this blog post we'll walk through the process of creating a simple app built with Django for the backend, React for the frontend, and LangChain to make requests to a Large Language Model (LLM).
In this example, we'll create a recipe suggestion app that consults the AI model to provide suggestions based on a user's requirements.
1. Set up the Django backend
1. Install Django:
pip install django
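The view code later in this post also relies on Django REST Framework, python-dotenv, and LangChain's OpenAI integration, so it's worth installing those now too (the package names below are the standard PyPI ones):

```
pip install djangorestframework python-dotenv langchain-openai
```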
2. Create a Django project and app:
django-admin startproject myproject
cd myproject
django-admin startapp recipes
3. Configure myproject/settings.py. In particular, update the list of INSTALLED_APPS:
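For example (adding rest_framework here is my assumption; it's needed for the API view we'll write later):

```python
# myproject/settings.py
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'rest_framework',  # Django REST Framework, used by the recipes view
    'recipes',         # the app we just created
]
```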
5. And finally, update the code in App.js to display the components we just created:
import './App.css';
import { Container, Row } from 'react-bootstrap';
import RecipeForm from './components/RecipeForm';

function App() {
  return (
    <Container>
      <Row>
        <RecipeForm />
      </Row>
    </Container>
  );
}

export default App;
6. Now run npm start to see the app you’ve just created!
There should be a blank space under Recipes, as our Django view still returns an empty list. The final piece of the puzzle is to have it generate a list of recipes using an LLM!
3. Make requests to the LLM
To run this example using LangChain, you'll need to create an account with OpenAI and register for an API key. Once you've done that, paste your API key into a .env file in the root folder of your project (as a line of the form OPENAI_API_KEY=your-key). Then update the Django view in recipes/views.py like this:
import json
import os

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage
from rest_framework.decorators import api_view
from rest_framework.response import Response


def create_human_message(diet, cook_time, key_ingredients):
    if diet == 'noRequirement':
        diet = ''
    if cook_time == 'underFifteen':
        cook_time_str = 'under 15 minutes'
    elif cook_time == 'fifteenToThirty':
        cook_time_str = '15 to 30 minutes'
    elif cook_time == 'thirtyToSixty':
        cook_time_str = '30 to 60 minutes'
    else:  # 'sixtyOrMore', or any unrecognised value
        cook_time_str = '60 minutes or more'
    if key_ingredients:
        key_ingredients_list = [x.strip() for x in key_ingredients.split(',')]
        key_ingred_str = f'that include {", ".join(key_ingredients_list)}'
    else:
        key_ingred_str = ''
    human_msg_str = f"Give me some recipes for {diet} dinners that I can make in {cook_time_str} {key_ingred_str}"
    print(human_msg_str)
    return HumanMessage(content=human_msg_str)


@api_view(['GET', 'POST'])
def get_recipes(request):
    load_dotenv()
    openai_api_key = os.getenv("OPENAI_API_KEY")
    if not openai_api_key:
        print('api key not found')
        return Response({'message': 'OpenAI API key not found'}, status=400)
    body = json.loads(request.body or b'{}')
    diet = body.get('diet', '')
    cook_time = body.get('cookTime', 'fifteenToThirty')
    key_ingredients = body.get('ingredients', '')
    human_message = create_human_message(diet, cook_time, key_ingredients)
    system_message = SystemMessage(
        content="You are an intelligent chatbot designed to help people find recipes from the web"
    )
    messages = [system_message, human_message]
    model = ChatOpenAI(model="gpt-4o-mini", api_key=openai_api_key)
    response = model.invoke(messages)
    # The model typically formats each recipe under a '###' heading, so split on those
    recipes = response.content.split('###')[1:]
    return Response({"recipes": recipes})
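If your earlier setup steps didn't already wire this view up, a minimal route might look like the following (the /recipes/ path is an assumption; it needs to match whatever URL your RecipeForm component fetches):

```python
# myproject/urls.py
from django.contrib import admin
from django.urls import path
from recipes.views import get_recipes

urlpatterns = [
    path('admin/', admin.site.urls),
    path('recipes/', get_recipes),  # assumed path; match your frontend's fetch URL
]
```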
Now refresh your browser on localhost:3000 and you can use your app to search for recipes!
There you have it – your own AI-powered app! There is a bit of loading time while the request is made to the LLM and the results are retrieved, but you could upgrade this by providing a streamed response, showing each chunk of text as it is returned.
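As a rough sketch of that streamed variant, reusing the helpers from the view above (the get_recipes_stream name and plain-text content type are my choices, not from the original app):

```python
from django.http import StreamingHttpResponse

@api_view(['POST'])
def get_recipes_stream(request):
    load_dotenv()
    openai_api_key = os.getenv("OPENAI_API_KEY")
    body = json.loads(request.body or b'{}')
    human_message = create_human_message(
        body.get('diet', ''),
        body.get('cookTime', 'fifteenToThirty'),
        body.get('ingredients', ''),
    )
    system_message = SystemMessage(
        content="You are an intelligent chatbot designed to help people find recipes from the web"
    )
    model = ChatOpenAI(model="gpt-4o-mini", api_key=openai_api_key)

    def token_stream():
        # .stream() yields message chunks as the API produces them,
        # instead of waiting for the whole completion
        for chunk in model.stream([system_message, human_message]):
            yield chunk.content

    return StreamingHttpResponse(token_stream(), content_type='text/plain')
```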