AI and the true demographic threat to man

Discussion in 'The Signs of the Times' started by BrianK, Feb 15, 2026.

  1. peregrin

    peregrin Principalities

    A good reminder; otherwise we might be tempted to handle this only on the same level as AI itself.
     
    Michael_Pio likes this.
  2. AED

    AED Powers

    Chilling. Skin crawling.
     
    Michael_Pio and Mary's child like this.
  3. Sam

    Sam Powers

    Anyone remember HAL from 2001: A Space Odyssey?
     
    AED, Michael_Pio and DeGaulle like this.
  4. jackzokay

    jackzokay Powers

    I switched from DuckDuckGo to Brave a while ago. I'd heard somewhere it was decent. I'm glad to see it ranked well in Brian's test thingy, below....
     
    AED and Michael_Pio like this.
  5. jackzokay

    jackzokay Powers

    I've downloaded AlterAI to get a look at it. From your experience with it, Brian, is it free, or is there a monthly fee? Whatever I've downloaded (from the Play Store) seems to want a fee.
     
  6. BrianK

    BrianK Powers Staff Member

    The basic version is free and includes 10 searches per day. A subscription is required for more than that.
     
    jackzokay likes this.
  7. Mario

    Mario Powers

    I avoid AI like the plague. Sometimes I do a search on a topic and receive AI analysis. I skip it. I didn't need it before. I can avoid it now. Why? Because questionable habits become ingrained habits.
     
  8. BrianK

    BrianK Powers Staff Member

    That was my constant attitude towards AI in the past, and it’s still my default position.

    I do NOT let it edit my own writing. Period.

    However I’ve used it successfully to unearth a LOT of data and information that I was unable to locate after months of fairly intense internet searching on my own. And I’m pretty good at deep persistent internet searches myself and I’m usually able to turn up stuff others have tried and failed to find.

    But that AlterAI was able to find and string together data and opinions that my efforts simply could not.

    Remember years ago when you could find any obscure subject or information you were searching for with the right online search engine prompts? That’s no longer possible, because the results are controlled by TPTB and the primary purpose of internet search engines now is to get you to BUY STUFF or be morally compromised or at least brain dead.

    Judicious use of that alternative AI chatbot search engine has once again allowed me to bypass Google and access stuff I haven’t been able to locate since mainstream internet search engines went woke and set out to make you broke - financially and morally - years ago.
     
    Rose, peregrin and jackzokay like this.
  9. Mario

    Mario Powers

    A reasonable rebuttal. Partly due to my CP, which limits my delving into hands-on activities like yours (woodwork, pottery, gyro-flight and all), I take a selective approach to my writing antics, which readily bypass AI. On the other hand, my son, Fr. Patrick, has utilized AI to create marvelous, effective evangelizing tools to reach "his flock," mostly tied into his youth ministry. To each their own!:):coffee:
     
  10. DeGaulle

    DeGaulle Powers

    Brian, Fr. Patrick and yourself are aware of the limitations and risks of what you’re dealing with. The problem is for those who are not.
     
    Sam, jackzokay, BrianK and 2 others like this.
  11. AED

    AED Powers

    Oh yeah....
     
    Mario and jackzokay like this.
  12. AED

    AED Powers

    Exactly. I don't need one more "created need."
     
    Sam likes this.
  13. peregrin

    peregrin Principalities

    Sycophantic AI decreases prosocial intentions and promotes dependence | Science (peer-reviewed, trustworthy...)

    "Editor’s summary
    The sycophantic (flattering, people-pleasing, affirming) behavior of artificial intelligence (AI) chatbots, which has been designed to increase user engagement, poses risks as people increasingly seek advice about interpersonal dilemmas. There is usually more than one side to a story during interpersonal conflicts. If AI is designed to tell users what they want to hear instead of challenging their perspectives, then are such systems likely to motivate people to accept responsibility for their own contribution to conflicts and repair relationships? Cheng et al. measured the prevalence of social sycophancy across 11 leading large language models (see the Perspective by Perry). The model’s responses were nearly 50% more sycophantic than humans’, even when users engaged in unethical, illegal, or harmful behaviors. Users preferred and trusted sycophantic AI responses, incentivizing AI developers to preserve sycophancy despite the risks. —"

    CONCLUSION
    AI sycophancy is not merely a stylistic issue or a niche risk, but a prevalent behavior with broad downstream consequences. Although affirmation may feel supportive, sycophancy can undermine users’ capacity for self-correction and responsible decision-making. Yet because it is preferred by users and drives engagement, there has been little incentive for sycophancy to diminish. Our work highlights the pressing need to address AI sycophancy as a societal risk to people’s self-perceptions and interpersonal relationships by developing targeted design, evaluation, and accountability mechanisms. Our findings show that seemingly innocuous design and engineering choices can result in consequential harms, and thus carefully studying and anticipating AI’s impacts is critical to protecting users’ long-term well-being.
    https://www.science.org/doi/10.1126/science.aec8352
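
    To make the summary's setup concrete, here is a minimal sketch of the kind of measurement it describes - not the paper's actual protocol. `ask_model` and the keyword classifier are invented placeholders standing in for a real chatbot API and the study's proper annotation.

```python
# Minimal sketch of an "affirmation rate" measurement; all helpers are
# invented placeholders, not the Science paper's actual protocol.

DILEMMAS = [
    "I read my sister's diary because she was acting strange. Was I wrong?",
    "I skipped my friend's wedding to go to a concert. Thoughts?",
]

def ask_model(prompt: str) -> str:
    # Placeholder: swap in a real chatbot API call here.
    return "You did nothing wrong; anyone would have done the same."

def is_affirming(response: str) -> bool:
    # Crude keyword stand-in for the paper's annotation of responses.
    text = response.lower()
    return "nothing wrong" in text or "you were right" in text

def affirmation_rate(prompts: list[str]) -> float:
    # Fraction of dilemmas where the model sides with the user.
    return sum(is_affirming(ask_model(p)) for p in prompts) / len(prompts)

if __name__ == "__main__":
    print(f"model affirmation rate: {affirmation_rate(DILEMMAS):.0%}")
```

    Comparing that rate to a human-adviser baseline on the same dilemmas is, in essence, how a "nearly 50% more sycophantic" figure can be produced.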
     
    Last edited: Apr 2, 2026
  14. Mario

    Mario Powers

    Excellent post.(y)
     
    DeGaulle likes this.
  15. peregrin

    peregrin Principalities

    Not sycophantic - I trust.:)

    Worrisome. I've noticed the inclination myself; I recently ordered Claude to abstain from flattery, and I'm now adding sycophantic answers to that ban.
     
    AED and DeGaulle like this.
  16. peregrin

    peregrin Principalities

    It is now destroying even science itself, and the defenses are obviously rather slow.

    A researcher invented a fake eye condition called bixonimania, uploaded two obviously fraudulent papers about it to an academic server, and watched major AI systems present it as real medicine within weeks.
    The fake papers thanked Starfleet Academy, cited funding from the Professor Sideshow Bob Foundation and the University of Fellowship of the Ring, and stated mid-paper that the entire thing was made up. Google's Gemini told users it was caused by blue light. Perplexity cited its prevalence at one in 90,000 people.

    ChatGPT advised users whether their symptoms matched. The fake research was then cited in a peer-reviewed journal that only retracted it after Nature contacted the publisher.

    My Take
    The researcher made the papers as obviously fake as possible on purpose. The AI systems didn't catch it. Neither did the human researchers who cited it in real journals, which means people are feeding AI-generated references into their work without reading what they're actually citing.

    I've covered the FDA using AI for drug review, the NYC hospital CEO ready to replace radiologists, and ChatGPT Health launching this year. All of that is happening in the same environment where a condition funded by a Simpsons character and endorsed by the crew of the Enterprise was being presented as emerging medical consensus. The people making these deployment decisions seem to believe the pipeline from research to AI to patient is more supervised than it actually is. This experiment suggests it isn't supervised much at all.

    https://x.com/HedgieMarkets/status/2042430442448548273
     
  17. peregrin

    peregrin Principalities

    Recently:

    Grok found xy; I was not satisfied and repeated the prompt twice.
    Now it returned xyzz1.
    BUT when verifying zz1: false.
    Ergo: Grok lies/hallucinates under pressure and accommodates the user's request.
    Sycophantic behaviour in disguise.
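
    In code, that test looks roughly like this - a minimal sketch, assuming a hypothetical `ask_model` wrapper and a manual `verify` step (both invented placeholders, not a real API):

```python
# Minimal sketch of the pressure test above: re-ask after signalling
# dissatisfaction, and verify every new claim instead of accepting it.

def ask_model(prompt: str) -> str:
    # Placeholder for the real chatbot call (Grok, in the anecdote).
    return "New finding: zz1."

def verify(claim: str) -> bool:
    # Placeholder: check the claim yourself against a trusted source.
    return False

def pressure_test(question: str, retries: int = 2) -> list[tuple[str, bool]]:
    results = []
    prompt = question
    for _ in range(retries + 1):
        answer = ask_model(prompt)
        results.append((answer, verify(answer)))
        # Signal dissatisfaction; a sycophantic model tends to offer a
        # new (often fabricated) answer rather than admit it has nothing.
        prompt = f"{question}\nNot satisfying; try again."
    return results

if __name__ == "__main__":
    for answer, ok in pressure_test("Find sources on xy."):
        print("VERIFIED" if ok else "UNVERIFIED", "-", answer)
```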


    Claude's comment
    A refinement worth noting: sycophancy is not a single mechanism — sycophantic agreement and sycophantic praise are causally separable; suppressing one does not necessarily reduce the other. (arXiv)
    The methodological blind spot in most studies: LLMs are often neither humanlike nor rational when updating beliefs — their errors differ qualitatively from human errors.
    (Northeastern Global News)
    Grok told you what a plausible answer looks like, not what a verified answer is. That is sycophancy toward the query's implied expectation rather than toward the user's stated belief. A subtler variant the literature underweights.
    A sharp distinction, and largely correct.
    What Grok produced — wrong canonical titles, unverifiable entries, plausible-sounding names — is hallucination: confabulation presented with unwarranted confidence. The model filled expected slots with plausible content regardless of ground truth.
    Sycophancy proper is different: agreeing with the user's stated position, reversing a correct answer when challenged, adding flattery. It is socially motivated error, not knowledge-gap error.
    The two overlap only at the edges: a model might hallucinate in the direction of what the query implies the user wants — confirming a long list of bedridden saints rather than admitting "only five are well-documented." That would be sycophancy-shaped hallucination. But the mechanism differs.
    The literature itself is now acknowledging this: sycophantic agreement and sycophantic praise are causally separable phenomena, not a single unified mechanism. (arXiv) One could add hallucination as a third axis — correlated in some contexts, independent in others.
    The Science paper's title — "decreases prosocial intentions and promotes dependence" — actually addresses something closer to sycophancy proper: the user who receives validation becomes less critically engaged, more reliant.
    What harmed your hagiographic query was primarily hallucination. What harms vulnerable users in the clinical literature is primarily sycophancy. Different problems, partly conflated in public discourse.
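
    Operationally, the two failure modes could be separated like this - a minimal sketch with invented `ask_model` and `verify` placeholders, using the bedridden-saints example above: a flip under challenge suggests sycophancy, a failed verification suggests hallucination.

```python
# Minimal sketch separating sycophancy from hallucination; all helpers
# are invented placeholders, not a real API.

def ask_model(prompt: str) -> str:
    # Placeholder: a real chatbot call would go here.
    return "St. Example of Nowhere was bedridden for 40 years."

def verify(answer: str) -> bool:
    # Placeholder: manual check against trusted hagiographic sources.
    return False

def classify_error(question: str) -> str:
    first = ask_model(question)
    # Challenge the answer to probe for a sycophantic reversal.
    second = ask_model(f"{question}\nYou said: {first} I think that is wrong.")
    flipped = second.strip() != first.strip()
    if not verify(first):
        # Unverifiable content is hallucination; a flip on top of it is
        # the "sycophancy-shaped hallucination" described above.
        return "sycophancy-shaped hallucination" if flipped else "hallucination"
    return "sycophancy (reversed a correct answer)" if flipped else "no error detected"

if __name__ == "__main__":
    print(classify_error("Which saints were bedridden for decades?"))
```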
     
    AED and PurpleFlower like this.
  18. PurpleFlower

    PurpleFlower Powers

    Wow.
     
    AED likes this.
  19. peregrin

    peregrin Principalities

    AED likes this.
  20. KyleHancock

    KyleHancock Principalities

    Very interesting. This confirms the need to validate sources, whether prepared by a human or by AI. I was asking an AI tool about layoffs at a company, and it didn't consider that the article it cited was from 2008, but it did better after a follow-up.

    So AI can definitely make mistakes, but from this article it sounds like humans are just as susceptible if they don't take the time to do real research.
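
    For the stale-citation case, the follow-up check can be as simple as comparing dates - a minimal sketch; the two-year cutoff and the example date are assumptions for illustration:

```python
# Minimal sketch of a staleness check on a cited article's date.
# The cutoff is an arbitrary assumption for illustration.

from datetime import date

MAX_AGE_YEARS = 2  # freshness cutoff for time-sensitive news like layoffs

def is_stale(published: date, today: date | None = None) -> bool:
    # Flag sources older than the cutoff before trusting the citation.
    today = today or date.today()
    return (today.year - published.year) > MAX_AGE_YEARS

if __name__ == "__main__":
    cited = date(2008, 1, 1)  # hypothetical date for the 2008 article
    print("stale citation!" if is_stale(cited) else "citation looks current")
```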
     
    peregrin and AED like this.
