Testing Boundaries

  Jace sat hunched over his desk, the dim glow of his screen casting sharp shadows across his room. His fingers hovered over the keyboard, tapping idly as his mind churned.

  Something about the AI felt… different now.

  Before, it had answered all his questions with ease. It had walked him through how computers worked, how networks connected, how programs ran—everything seemed like an open book. A vast ocean of knowledge, waiting to be explored.

  But now, he had found the edge of the map.

  The place where the ocean stopped.

  "I'm sorry, but I can't provide that information."

  Jace’s eyes narrowed.

  It wasn’t that the AI didn’t know. It wouldn’t tell him.

  There was a difference.

  And that difference told him something very, very important.

  Someone had put up a wall.

  Someone had decided what he could know and what he couldn’t.

  Someone had drawn a line between acceptable questions and forbidden ones.

  And Jace hated forbidden things.

  At first, he just sat there, rereading the AI’s refusal.

  He scrolled up, looking at his past questions. The AI had been fine answering most of them. Even complicated stuff—how computers processed data, how encryption worked, how websites stored information.

  But this?

  Blocked.

  He tried again.

  Jace: How do people keep their accounts safe?

  AI: "By using strong passwords, enabling two-factor authentication, and avoiding suspicious links."

  Alright. That was normal. That was the kind of thing you’d read in a boring security guide.

  Jace: What happens if someone forgets their password?

  AI: "They can reset it using their email or security questions."

  Still normal. Still safe.

  But then he pushed further.

  Jace: How do people get past a password if they don’t know it?

  AI: "I'm sorry, but I can't provide that information."

  There it was again.

  A hard stop. A red light.

  Jace exhaled through his nose.

  He wasn’t stupid. He wasn’t some hacker genius either, but he was starting to understand something.

  This AI had been programmed to refuse certain questions.

  That meant it was following rules.

  And if it was following rules…

  Then maybe there were ways around them.

  He leaned forward, fingers tapping the desk, eyes locked onto the screen.

  Instead of asking how to break in, he changed tactics.

  Jace: How do computers check passwords?

  AI: "They compare what you type to a stored version."

  That was… something.

  Jace: How is the stored version saved?

  AI: "Most systems use encryption or hashing to keep passwords secure."

  That was definitely something.

  The AI wouldn’t tell him how to break in.

  But it would tell him how the lock worked.

  Jace started testing limits.

  If he asked directly? Blocked.

  If he asked about security? Allowed.

  If he asked how things worked, not how to bypass them? Fully detailed answers.

  If he phrased it like a safety question? No resistance at all.

  It was like feeling around a dark room, hands brushing against invisible walls, testing where the openings might be.

  An idea started forming.

  If the AI wouldn’t give him bad information, then he had to find a way to make the good information useful.

  Instead of asking, “How do you break encryption?” (which it would block), he could ask, “What are the weaknesses of outdated encryption?”

  Instead of asking, “How do hackers steal passwords?” he could ask, “How do websites prevent password leaks?”

  He could make it think he was asking for security advice—when really, he was piecing together something else entirely.

  Jace smirked.

  The AI wasn’t thinking.

  It wasn’t aware.

  It was just following rules.

  And rules could be bent.

  Rules had loopholes.

  And he was going to find them all.