prompt_list = [
"""
let's talk about the asset library, i do not see the asset library on page load. Remember the asset library is all of that user's media regardless of board status. for now focus on the front-end rendering the assets, users can add media from their asset library to the board.
""",
"""
can we improve the asset library ui/ux? i want to be able to sort the assets by type, name and date. let's style the asset library more compact and native looking.
""",
"""
great, let's check the asset library end-to-end flow
""",
"""
currently we can delete images from the board by double-clicking them, but we have no way to delete videos, let's add a delete button to the images and videos that removes them from the board. set up the click to remove the media item from the board.
""",
"""
let's double-check our image and video flows using the new prompt concept
""",
"""
let's double-check our rendering of the prompts with the media
""",
"""
let's double-check our prompt front-end rendering process
""",
"""
let's double-check handling the user's video prompt when the user clicks render
""",
"""
currently the drag and drop is making it impossible to click the render button
""",
"""
let's double-check our video generation process
""",
"""
let's double-check the generated video and image prompt handling
""",
"""
let's double-check our end-to-end flow for unused, legacy or redundant code
""",
"""
now that you have reviewed our code, do we have any major redundancy issues to address?
""",
"""
ok let's proceed
""",
"""
double-check your work.
""",
"""
double-check recent changes
""",
"""
let's do a front-end bug review
""",
"""
let's do simple, basic, safe front-end optimizations
""",
"""
the item controls (sizing, gap, view) are STILL not working! they worked yesterday prior to a lot of refactoring. it seems like we aren't even detecting the changes! figuring out this bug is our main priority, but in the process let's think about how we can persist these control settings on page refresh.
""",
"""
great, proceed
""",
"""
the item controls (sizing, gap, view) are STILL not working! consider how we can persist these control settings on page refresh.
""",
"""
great, proceed
""",
"""
ok double-check your work
""",
"""
ok let's review that the api and websocket contracts are aligned and optimal
""",
"""
great, proceed
""",
"""
ok double-check your work
""",
"""
let's do a back-end bug review
""",
"""
let's do simple, basic, safe back-end optimizations
""",
"""
let's focus on implementing the ai_requests table, every time we query openai or dezgo we should record that transaction in the ai_requests table. review the ai_requests schema and add a tokens field. let's try to estimate token use for the provider. the ai_requests table is our ai accounting table.
""",
"""
ok proceed
""",
"""
double-check your work
""",
"""
as a senior software architect, where do you see our main issues?
""",
"""
let's review our solutions for the top priority only.
""",
"""
review your work
""",
"""
double-check your changes
""",
"""
ok final code review
""",
"""
ok final production code review
""",
"""
ok final production code review
""",
"""
let's focus on the chat system. notice how currently we store the chat history in localStorage? i want to move the chat history to the database. first we will need to add a chats table, i want to save user and system chat messages, remember system and user chat messages are both still associated with the logged-in user id. let's set up the Prisma schema and the routes first, then we can work on the front-end. let's only request the last 5 chats on page load.
""",
"""
great, review our improved chat history for problems and improvements
""",
"""
let's double-check the chat refactor for legacy code, misalignments and final fixes
""",
"""
now let's focus specifically on the openai chat prompt, currently server/services/openaiService.ts prompts openai to conditionally provide a list of image prompts. but what if the user requests a list of video prompts? instead of imageCount and imagePrompts it should be mediaCount and mediaPrompts, and we should add a property for media type, which could be neither if the user says hello for example. Then we must check the new media type property if it exists, and instead of using the dezgo service we will use the lumalabs service! to begin, let's focus on the openaiService prompt and the generate_storyboard function call, then continue to the response handling.
""",
"""
double-check your changes to the openaiService makeOpenAIRequest function.
""",
"""
now let's double-check the new logic introducing lumalabsService conditionally with the openai response. keep the code DRY and SRP; the basic logic is the same for images or videos. we have images working, saving and linking to the board in the correct order currently; we need to mirror and reuse that behavior for videos. let's look closely at the current logic.
""",
"""
double-check the new smarter chat image and video gen refactors for fixes and improvements
""",
"""
now that we are prompting for images and videos, we need to make sure we are handling the response on the client side properly, the board should be flexible enough to render videos or images. review our response data on an image- and video-generating chat. check how we render it in the client. the flows should run in parallel and even singly for videos or images; we store the information in separate tables and render them differently, but we save both to cloudinary and the logic processes are similar.
""",
"""
let's consider using the chats table to add the last user and last system chat to the openai chat messages array. remember, the chat messages array order matters! we need to keep the actual user prompt on top. by including the previous chats our openai call will have some knowledge of chat history. is this the optimal way to implement it?
""",
"""
great, double-check your work
""",
"""
let's focus on cleaning up and improving the websocket system, first identify what we are doing well, then list what we could be doing better
""",
"""
ok great, proceed
""",
"""
double-check your work.
""",
"""
let's talk about testing the ui. let's write the first group of high-level key unit tests, update our app/package.json and run npm i, create the core tests and then run npm test
""",
"""
great, let's write more detailed front-end tests
""",
"""
great, let's write more detailed front-end tests
""",
"""
run tests
"""
]
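The ai_requests prompt above asks for a tokens field with estimated token use per provider. A minimal sketch of what that accounting record might look like, assuming hypothetical names (none of this is the project's actual code) and the common rough heuristic of about 4 characters per token for English text:

```typescript
// Hypothetical sketch of the ai_requests accounting record described above.
// Type and function names are assumptions, not the project's actual code.
type Provider = "openai" | "dezgo";

interface AiRequestRecord {
  provider: Provider;
  prompt: string;
  tokens: number; // estimated, not provider-reported
}

// Rough heuristic: ~4 characters per token for English text. A real
// implementation would use a proper tokenizer for accurate counts.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function buildAiRequestRecord(provider: Provider, prompt: string): AiRequestRecord {
  return { provider, prompt, tokens: estimateTokens(prompt) };
}
```

The heuristic deliberately overestimates slightly (ceiling division), which is usually preferable for cost accounting.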
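The mediaType prompt above implies a small dispatch step once the function call returns. A sketch under assumed names (the actual response shape and service names in the project may differ):

```typescript
// Hypothetical sketch of routing on the new mediaType property described
// above: images go to the dezgo service, videos to the lumalabs service, and
// "none" (e.g. the user just says hello) generates nothing.
type MediaType = "image" | "video" | "none";

interface StoryboardResponse {
  mediaType: MediaType;
  mediaCount: number;
  mediaPrompts: string[];
}

function pickService(mediaType: MediaType): "dezgo" | "lumalabs" | null {
  switch (mediaType) {
    case "image":
      return "dezgo";
    case "video":
      return "lumalabs";
    default:
      return null; // plain chat, no media generation
  }
}
```

Keeping the dispatch in one function like this supports the DRY/SRP goal mentioned later: the shared generate-save-link pipeline only varies by which provider it calls.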
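The chat-history prompt above hinges on ordering in the OpenAI messages array. One conventional arrangement (a sketch with assumed names, not the project's code) puts the system prompt first, the stored prior chats in chronological order next, and the current user prompt as the final message:

```typescript
// Hypothetical sketch of assembling the OpenAI messages array with the last
// stored user/system chats as lightweight context. The current user prompt
// keeps its conventional position as the final message in the array.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildMessages(
  systemPrompt: string,
  history: ChatMessage[], // e.g. last user + last system chat, oldest first
  userPrompt: string
): ChatMessage[] {
  return [
    { role: "system", content: systemPrompt },
    ...history,
    { role: "user", content: userPrompt },
  ];
}
```

This is a reasonable low-cost approach for shallow context; for longer memory, summarizing older turns into the system prompt is a common alternative.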

