Okay, so today I’m gonna walk you through this thing I was messing with called “older sister and little brother”. It’s not as weird as it sounds, I promise! Basically, I was trying to get two different machine learning models to kinda talk to each other and learn from each other; if you know the jargon, it’s essentially a teacher-student / pseudo-labeling setup. Think of it like an older sister showing her little brother the ropes, but with algorithms.

The Setup:
- I had Model A, the “older sister”. This one was already trained on a decent-sized dataset and knew its stuff. It was pretty good at classifying images.
- Then there was Model B, the “little brother”. This one was fresh out of the box, untrained, and kinda clueless. It needed some guidance. (There’s a rough sketch of both models right after this list.)
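Here’s a minimal sketch of that setup, assuming PyTorch/torchvision. The resnet18 architecture, the ImageNet weights, and the names model_a / model_b are just stand-ins for whatever you actually have on hand, not the exact models from the experiment.

```python
# Minimal sketch of the setup, assuming PyTorch/torchvision.
# The resnet18 architecture and ImageNet weights are stand-ins for
# "already trained on a decent-sized dataset"; swap in your own models.
import torchvision

# Model A, the "older sister": pretrained, already decent at classifying images.
model_a = torchvision.models.resnet18(weights="IMAGENET1K_V1")
model_a.eval()  # she only makes predictions here, so keep her in eval mode

# Model B, the "little brother": same architecture, but randomly initialized.
model_b = torchvision.models.resnet18(weights=None)
model_b.train()
```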
How I started:
First, I grabbed a bunch of images. Not the ones Model A was trained on; fresh, unseen stuff. Then, I fed those images into Model A, the older sister. Model A did its thing and spat out predictions – basically, what it thought each image was.
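In code, that first pass might look something like this. The random tensors are just stand-ins for the fresh images so the snippet runs on its own; in the real experiment they’d be an actual image set.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for the fresh, unseen images: random tensors so the snippet runs.
# In the real experiment this would be a proper image dataset/loader.
unlabeled_images = torch.randn(256, 3, 224, 224)
unlabeled_loader = DataLoader(TensorDataset(unlabeled_images), batch_size=32)

all_images, all_preds = [], []
with torch.no_grad():  # Model A is only predicting, no gradients needed
    for (images,) in unlabeled_loader:
        logits = model_a(images)       # the older sister does her thing
        preds = logits.argmax(dim=1)   # what she thinks each image is
        all_images.append(images)
        all_preds.append(preds)
```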
Generating the “teaching” data:
This is where it got interesting. I took Model A’s predictions and treated them like they were the gospel truth. Like the older sister knows best, right? I used these predictions as the “labels” for the images, even though they weren’t the real labels. This created a new, synthetic dataset where the images were labeled by Model A’s guesses.
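A sketch of that step, continuing from the snippet above: the predictions become the labels, and the real labels are nowhere in sight.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

images_tensor = torch.cat(all_images)   # the fresh images
pseudo_labels = torch.cat(all_preds)    # Model A's guesses, treated as ground truth

# Synthetic "teaching" dataset: images paired with the older sister's guesses.
synthetic_ds = TensorDataset(images_tensor, pseudo_labels)
synthetic_loader = DataLoader(synthetic_ds, batch_size=64, shuffle=True)
```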
Training the little brother:
Next, I took this synthetic dataset and used it to train Model B, the little brother. Model B was basically learning from Model A’s mistakes and successes. It was trying to mimic what the older sister already knew.
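That step is just ordinary supervised training, except the targets are Model A’s guesses. A sketch, with an assumed optimizer, learning rate, and epoch count rather than the exact ones from the experiment:

```python
import torch
import torch.nn as nn

optimizer = torch.optim.Adam(model_b.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):  # epoch count is arbitrary here
    for images, pseudo_labels in synthetic_loader:
        optimizer.zero_grad()
        loss = criterion(model_b(images), pseudo_labels)  # mimic the older sister
        loss.backward()
        optimizer.step()
```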
The Loop:
I didn’t stop there. After training Model B for a bit, I wanted to see if it had learned anything useful. So, I grabbed a new set of images (again, unseen by both models) and had Model B make its own predictions. Then, I compared Model B’s predictions to Model A’s.
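The comparison boils down to an agreement rate between the two models, plus plain accuracy if you have true labels for that new set. A sketch, where eval_loader is a hypothetical loader yielding (image, true_label) batches:

```python
import torch

model_b.eval()
agree = correct_a = correct_b = total = 0
with torch.no_grad():
    for images, true_labels in eval_loader:
        preds_a = model_a(images).argmax(dim=1)
        preds_b = model_b(images).argmax(dim=1)
        agree += (preds_a == preds_b).sum().item()        # how often B sides with A
        correct_a += (preds_a == true_labels).sum().item()
        correct_b += (preds_b == true_labels).sum().item()
        total += true_labels.size(0)

print(f"agreement: {agree / total:.2%}, "
      f"Model A accuracy: {correct_a / total:.2%}, "
      f"Model B accuracy: {correct_b / total:.2%}")
```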
Did it work?
Well, kinda. Model B definitely improved after being “taught” by Model A. Its accuracy went up, and it started to agree with Model A more often. But it wasn’t perfect. Model B still made mistakes, and sometimes it even disagreed with Model A on images that Model A got right.
Where I messed up (maybe):
Looking back, I think the biggest issue was that Model A wasn’t perfect to begin with. It had its own biases and blind spots. So, Model B was essentially learning those same biases. It was like the older sister teaching the little brother her bad habits.
Next Steps (if I cared to continue):
If I were to try this again, I’d probably start with a better “older sister” model, one that’s more accurate and less biased. I might also try adding some real, human-labeled data to the mix, to give the “little brother” a more balanced education. It was a fun experiment, though, and I learned a lot about how different models can interact and influence each other. Maybe you guys can try it out and come up with better ideas.
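For what it’s worth, mixing in real labels could be as simple as concatenating a human-labeled dataset with the synthetic one and training the little brother on the result. A sketch, where human_labeled_ds is hypothetical:

```python
from torch.utils.data import ConcatDataset, DataLoader

# human_labeled_ds is hypothetical: a regular (image, true_label) dataset.
mixed_ds = ConcatDataset([synthetic_ds, human_labeled_ds])
mixed_loader = DataLoader(mixed_ds, batch_size=64, shuffle=True)
# Model B would then train on mixed_loader exactly like it did on synthetic_loader.
```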
