Astro Pema AI

Service Scope

Astro Pema AI provides professional local and remote Linux system administration for small and medium-sized businesses, organizations, and individuals. Services include secure server setup, configuration, maintenance, and troubleshooting for web, mail, and database servers, as well as proactive security hardening and log analysis.

Why Proper Linux Administration Matters

Linux servers power much of the modern internet, from websites and email platforms to databases and file storage. Having a skilled administrator ensures your systems are secure, efficient, and always available without the overhead of an in-house IT department.

Remote administration allows for real-time monitoring, immediate troubleshooting, and scheduled maintenance from anywhere, ensuring minimal downtime and optimal performance for mission-critical services.

Core Service Areas

Web Hosting Management: Secured Apache web server configurations, TLS certificates, performance tuning.

Email Services: Secured Postfix/Dovecot setups, SPF/DKIM/DMARC compliance, spam filtering.

Security Hardening: Firewall configuration, intrusion detection systems, VPN setup.

Log Monitoring & Analysis: Real-time log review, anomaly detection, incident response.

System Updates: Kernel upgrades, package management, dependency resolution.

Storage & Backup: RAID configuration, automated backups, recovery planning.

Performance Optimization: Resource tuning, database indexing, cache configuration.

Service Goals

Maintain 99.9% uptime for all managed systems.

Ensure all services meet modern security compliance standards.

Proactively detect and mitigate threats before impact.

Provide clear documentation and reporting for all maintenance.

Offer flexible support packages tailored to client needs.
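The 99.9% uptime goal above translates into a concrete downtime budget. A quick sketch of the arithmetic:

```python
HOURS_PER_YEAR = 365.25 * 24  # 8766 hours, averaging leap years

def downtime_budget(uptime_pct: float) -> float:
    """Allowed downtime in hours per year for a given uptime target."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)

# 99.9% uptime leaves roughly 8.8 hours of downtime per year.
```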

Key Advantages

Fully Remote: No need for on-site presence; immediate response from anywhere.

Cost-Effective: Reduce expenses by outsourcing specialized Linux administration.

Scalable: Support from single server to multi-node clusters.

Secure: All access conducted via encrypted channels with multi-factor authentication.

Expertise: Over a decade of Linux administration experience across multiple distributions.



In-House Linux Server Stack

I can assist businesses and individuals in setting up their own in-house Linux servers, fully integrated into the local area network and protected by a properly configured firewall. This allows you to host mail, web, and other critical services on-site, ensuring maximum control and privacy.

Owning your own server does not require expensive, enterprise-grade hardware. With Linux, even older laptops or desktops can be repurposed into secure, private mail and web servers. This approach offers complete control over your data and services, and the only true limitations are the capabilities of the hardware you choose, not the arbitrary quotas, pricing tiers, or restrictions imposed by a commercial hosting provider.

Why In-House (vs. Cloud)

Data sovereignty: Physical control of disks, network, and backups.
Cost control: Fixed capex; no bandwidth/egress surprises.
Performance: Tune filesystems, caching, and hardware for your workloads.
Multi-service: Mail, Web, Files, VPN, Git, Monitoring.

Key Considerations

Security: Patch cadence, firewall, IDS/IPS, backups, monitoring.
Reliability: RAID, ECC RAM if possible, UPS, off-site backups.
Connectivity: Static IPs preferred; verify ISP policy.
Compliance: SPF/DKIM/DMARC, TLS everywhere.

Reference Architecture

Web: Apache/Nginx + PHP/Python
Mail: Postfix + Dovecot + Rspamd + DKIM/DMARC/SPF
Storage: ZFS or md-RAID + LVM; Samba/NFS; optional Nextcloud
Network: WireGuard VPN; iptables/nftables + ipset
Monitoring: Prometheus + node_exporter; fail2ban; journald/syslog

External → Edge FW → Linux Host
VLANs optional: 10=DMZ, 20=LAN, 30=storage
Open ports minimal: 25, 80, 443, 993, 22 (restricted)
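As a sanity check on the minimal-ports rule above, here is a small standard-library sketch that flags unexpected open TCP ports (the allowed set mirrors the list above; the scan range is illustrative):

```python
import socket

ALLOWED = {22, 25, 80, 443, 993}  # the minimal set listed above

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def audit(host: str, ports=range(1, 1025)) -> list[int]:
    """Return open ports that fall outside the allowed minimal set."""
    return [p for p in ports if p not in ALLOWED and is_port_open(host, p)]
```

Run `audit()` from outside the firewall so NAT and edge rules are part of the test.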

Security Baseline

Firewall default-deny inbound; allow only required ports.
ipset + CNN-GRU auto-blocker for web & mail.
Automatic updates; SSH key-only login.
Separate service accounts; minimal sudo.

Mail Server Notes

Get proper PTR (rDNS); ensure forward matches.
Publish SPF, sign with DKIM, enforce DMARC.
Enable postscreen + DNSBLs; require TLS.
Use Rspamd/SpamAssassin.
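The forward-confirmed rDNS requirement can be checked programmatically. A hedged sketch with injectable lookup functions (the defaults use the standard-library resolvers; function names are my own):

```python
import socket

def fcrdns_ok(ip: str,
              ptr_lookup=lambda ip: socket.gethostbyaddr(ip)[0],
              a_lookup=lambda name: socket.gethostbyname_ex(name)[2]) -> bool:
    """Forward-confirmed rDNS: the PTR name must resolve back to the IP."""
    try:
        name = ptr_lookup(ip)          # reverse lookup: IP -> hostname
        return ip in a_lookup(name)    # forward lookup must include the IP
    except OSError:
        return False                   # no PTR, or name does not resolve
```

The injectable lookups make the check testable without live DNS, and let you swap in a caching resolver in production.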


Real-Time Log Blocking

Tail logs and react in real time; extract URL features.
Allowlist safe URLs (GET /, robots.txt, static assets).
No PTR + suspicious ⇒ block & add to ipset.
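The blocking rules above might be sketched as follows (the allowlist, log-format regex, and ipset name are illustrative assumptions, not the production configuration):

```python
import re
import subprocess

# Hypothetical allowlist matching the "safe URL" rule above.
SAFE_PATHS = {"/", "/robots.txt"}
STATIC_RE = re.compile(r"^/static/|\.(css|js|png|jpg|ico)$")
# Minimal combined-log prefix: client IP, then "METHOD /path ...".
LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+)')

def should_block(log_line: str, has_ptr: bool) -> tuple[str, bool]:
    """Return (ip, block?) for one access-log line: no PTR + suspicious => block."""
    m = LOG_RE.match(log_line)
    if not m:
        return ("", False)
    ip, method, path = m.groups()
    safe = method == "GET" and (path in SAFE_PATHS or STATIC_RE.search(path))
    return (ip, (not safe) and (not has_ptr))

def block_ip(ip: str, ipset_name: str = "blocklist") -> None:
    # Adds the offender to an ipset the firewall already references.
    subprocess.run(["ipset", "add", ipset_name, ip, "-exist"], check=False)
```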

Backup & DR

3-2-1 rule: 3 copies, 2 media, 1 off-site (encrypted).
Snapshots + daily rsync/Restic/Borg.
Test restores quarterly.
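One way to implement the daily snapshot step is rsync with --link-dest, which hard-links unchanged files against the previous run. A sketch (paths and the destination host are hypothetical):

```python
from datetime import date

def rsync_cmd(src: str, dest: str) -> list[str]:
    """Build a daily snapshot command; unchanged files are hard-linked."""
    today = date.today().isoformat()
    return [
        "rsync", "-a", "--delete",
        f"--link-dest={dest}/latest",   # reuse unchanged files from last run
        src, f"{dest}/{today}",
    ]
```

In practice a wrapper would run this via subprocess, update the `latest` symlink on success, and prune old snapshots; Restic or Borg achieve the same with built-in deduplication and encryption.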

Hardware Guidance

CPU with AES-NI; 32-64 GB RAM; ECC if possible.
SSD mirror for OS; RAID-Z/RAID-10 for data.
Dual NICs; UPS for safe shutdown.

Deployment Checklist

Install LTS Linux; enable unattended upgrades.
Partition OS/data with proper mount options.
Configure firewall + ipset.
Set up Apache/Nginx + TLS.
Install Postfix/Dovecot + SPF/DKIM/DMARC.
Deploy CNN-GRU monitor + fail2ban.
Enable monitoring and backups.


Advanced Machine Learning Intrusion Detection

I develop and implement advanced machine learning-based intrusion detection systems for real-time server security monitoring. My work focuses on complementing traditional rule-based tools like fail2ban with hybrid CNN-GRU architectures that provide superior threat detection and response capabilities.

Model Architecture

Convolutional Neural Networks for spatial pattern recognition.
Gated Recurrent Units for temporal sequence analysis.
Analyzes server logs in real-time to identify sophisticated attack patterns across sessions, IPs, and time periods — something traditional signature-based systems consistently miss.
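Before the CNN layers can read a log line, it must become a fixed-length numeric sequence. A minimal char-level encoding sketch (the real feature pipeline is more involved; the length and vocabulary here are illustrative):

```python
def encode_line(line: str, max_len: int = 128) -> list[int]:
    """Map printable ASCII to 1..95, everything else to 0; pad/truncate to max_len."""
    ids = [(ord(c) - 31) if 32 <= ord(c) < 127 else 0 for c in line[:max_len]]
    return ids + [0] * (max_len - len(ids))
```

The CNN consumes these per-line sequences for spatial patterns; the GRU then consumes the per-line features in order, capturing behavior across a session.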

Performance Results

98-99% detection accuracy in proof-of-concept testing.
Significantly reduced false positives.
Sub-second response times suitable for production.

Adoption Challenges

Overcoming resistance to AI-based security.
Presenting empirical evidence showing CNN-GRU outperforms traditional methods by 10-15%.
Adaptive learning that automatically adjusts to new attack vectors without manual rules.

Threat Detection Strengths

Detects advanced persistent threats and coordinated multi-stage attacks.
Identifies behavioral anomalies missed by static rule sets.
Predicts attacker behavior and implements dynamic deception strategies automatically.

Security Evolution

From reactive, manually-configured tools to proactive, self-learning defense systems.
Adapts to the modern threat landscape in real-time.

Advantages of Running a Local LLM In-House

Hosting a large language model (LLM) on your own server gives you complete control over your AI capabilities without depending on the infrastructure or policies of major tech companies. With an in-house setup, you decide how the model is trained, fine-tuned, and updated—ensuring it reflects your specific needs, workflows, and data privacy requirements.

Unlike cloud-based LLM services, where your queries and data pass through external systems, a local deployment keeps all processing and storage under your direct control. This eliminates the risk of confidential or proprietary information leaving your network, providing compliance benefits in regulated industries and peace of mind for any security-conscious organization.

Performance is also in your hands. By running locally on GPU-optimized hardware, you remove the latency of internet connections and avoid rate limits or API restrictions imposed by third-party providers. Your LLM can respond instantly, 24/7, with no per-query costs or service interruptions caused by external outages.

Cost control is another key advantage. Once your hardware is in place, operational costs are limited to electricity and maintenance—no ongoing per-seat or per-request fees. You're not subject to sudden price hikes or tier changes from cloud providers, making budgeting predictable and sustainable.

Finally, in-house LLMs can be deeply customized. You can integrate them directly with your internal systems, train them on your proprietary datasets, and even modify their behavior at the code level. This flexibility is rarely possible with commercial closed platforms, giving you a competitive edge and full autonomy over your AI capabilities.

Hosting Notes

This web and mail server is hosted from a private home network in Ashland, OR, USA.
While rare, it could be temporarily offline due to upgrades, power, or telecom interruptions.
Thank you for your understanding!

An example of an AI-based system that can be adapted to any specialty requiring complex data analysis.

Overview: The Astro Pema AI Mythopoetic machine intelligence project reimagines astrology as a symbolic language for exploring consciousness. It combines traditional astrological logic, planetary pattern databases, and vector representations with modern SLM/LLM-based narrative synthesis.

System Goals: The goal isn't to reproduce astrology but to use it as a structure to generate symbolic prompts, reflect mythic intelligence, and explore emergent semantic space through language models.

Core Components

    PostgreSQL database (astropema) stores over 15,000 unique, curated astrological interpretations.

    Custom Python scripts handle birth chart parsing, JSON formatting, and prompt generation for language models.

    GGUF-compatible SLMs such as mistral-7b-instruct, mythomax, and others produce experimental, structured symbolic narratives.

    Web interface (in progress) to let users generate their own charts and receive mythopoetic readings synthesized from the chart's symbolic meanings. Each reading is unique to the individual, because each combination of planetary keys is unique (when time of birth is included).
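The prompt-generation step can be illustrated with a small sketch (the chart schema shown is hypothetical; real placements come from the chart-parsing scripts and the astropema database):

```python
def build_prompt(chart: dict) -> str:
    """Turn planetary placements into a symbolic prompt for the SLM."""
    lines = [f"{planet} in {p['sign']} (house {p['house']})"
             for planet, p in chart.items()]
    return ("Compose a mythopoetic reading from these placements:\n"
            + "\n".join(lines))

# Illustrative input, not a real natal chart:
example_chart = {
    "Sun": {"sign": "Leo", "house": 10},
    "Moon": {"sign": "Pisces", "house": 4},
}
```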

Tech Stack

    Python (LLM logic, JSON processing)

    PostgreSQL (chart + interpretation database)

    Frontend: HTML/CSS, PHP-based UI

Hardware

Astro Pema runs on a high-performance workstation with AI acceleration capabilities:

    CPU: Intel Core i9-14900F (24 cores, 32 threads, up to 5.8 GHz)

    Memory: 64GB DDR5-6000 (expandable to 128GB)

    GPU: NVIDIA GeForce RTX 5070 Ti (16GB VRAM) with CUDA 12.9 support

    Cache: 36MB L3, 32MB L2, optimized for AI workloads

    Architecture: x86_64 with VT-x virtualization

    AI Stack: Python environments with GPU acceleration for chart calculations

    Services: Web server, mail server, and LLM inference engine

Atari Deep Reinforcement Learning Project

Atari Lunar Lander Demo

Project Scope: This project explores deep reinforcement learning by training an AI agent to master classic Atari 2600 games—specifically, Breakout—using the Arcade Learning Environment (ALE) and the DQN, PPO, and A2C architectures implemented through Stable-Baselines3. The primary objective is to train a model from scratch on local hardware using custom Python code, visual feedback via TensorBoard, and video capture of agent behavior across training milestones.

Why Atari Still Matters in AI

The path from video games to advanced artificial intelligence might sound like science fiction, but it's real—and it starts with Atari.

In 2013, DeepMind's groundbreaking work showed that a single deep neural network could learn to play dozens of Atari 2600 games using only raw pixel input and reward signals. The algorithm, known as Deep Q-Network (DQN), didn't need hand-crafted features or pre-programmed strategies—it learned by playing.

Atari games provided the perfect training ground: standardized environments, deterministic rules, visual complexity, and delayed rewards. Mastering them was a critical milestone in proving that deep reinforcement learning could handle real-world-like complexity.

Technical Stack

    Framework: PyTorch + Stable-Baselines3

    Environment: Gymnasium with ALE + custom ROM handling

    Model: DQN with CNN policy, experience replay, exploration decay

    Tooling: TensorBoard, imageio for video, cron-based job scheduling

    Hardware: Local CPU-based Linux machine (no GPU), Ubuntu 24.04, 16 GB RAM
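Two DQN building blocks mentioned above, experience replay and exploration decay, can be sketched in plain Python (capacity and decay constants are illustrative; Stable-Baselines3 provides production versions):

```python
import random
from collections import deque

class ReplayBuffer:
    """Uniform experience replay: old transitions fall off the back."""
    def __init__(self, capacity: int = 100_000):
        self.buf = deque(maxlen=capacity)

    def push(self, transition):  # (state, action, reward, next_state, done)
        self.buf.append(transition)

    def sample(self, batch_size: int):
        return random.sample(self.buf, batch_size)

def epsilon(step: int, start=1.0, end=0.05, decay_steps=100_000) -> float:
    """Linear exploration decay from start to end over decay_steps."""
    frac = min(step / decay_steps, 1.0)
    return start + frac * (end - start)
```

Replay breaks the correlation between consecutive frames, and the decaying epsilon shifts the agent from random exploration toward exploiting its learned policy.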

Goals

    Achieve consistent episode rewards >30 in Breakout

    Learn to interpret TensorBoard metrics to inform architecture and hyperparameter tuning

    Develop a full feedback loop: train → evaluate → adjust → retrain

    Build resilience through CPU-only training and memory constraints

    Establish reproducible results through versioned models and logs

Deep Learning Cart Pole Simulation

AI Assisted Knowledge Database Example

Project Scope: This ongoing project explores the use of local small language models (SLMs) and larger hosted LLMs to generate, structure, and insert scientifically meaningful data into a custom PostgreSQL database. The focus has been on medicinal plant knowledge from the Veracruz region in Mexico—leveraging generative models to synthesize structured information from minimal prompts (e.g., Latin names).

Pipeline Architecture

    Language Models:

      SLM: mistral-7b-instruct-v0.2.Q5_K_M.gguf, gemma-2-2b-it-Q4_K_M.gguf, westseverus-7b-dpo.Q4_K_M.gguf, estopianmaid-13b.Q4_K_M.gguf, and others, accessed via custom code that interfaces with the llama.cpp stack.

      LLM fallback resources: Gemini, Claude, or OpenAI services via web interface.

    Runtime: Python script running locally via llama_cpp bindings (quantized models)

    Data Format: Output parsed and stored as structured PostgreSQL rows with raw output saved
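The parse-and-store step might look like the following sketch (the field names are an assumed schema for illustration, not the actual database columns):

```python
import re

FIELDS = ("latin_name", "common_name", "uses", "region")  # assumed schema

def parse_output(text: str) -> dict:
    """Parse 'Key: value' lines from raw model output into a row dict."""
    row = {f: None for f in FIELDS}
    for line in text.splitlines():
        m = re.match(r"\s*([A-Za-z_ ]+):\s*(.+)", line)
        if m:
            key = m.group(1).strip().lower().replace(" ", "_")
            if key in row:
                row[key] = m.group(2).strip()
    return row
```

Fields the model omits stay None, so incomplete generations can be flagged for an LLM-fallback pass before the row is inserted; the raw output is archived alongside the structured row.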

Usage Philosophy

Rather than "extract" data, the models are tasked with synthesizing culturally rooted, biologically informed summaries. This combines computational creativity with traditional knowledge systems—respectfully and with attribution to the model as source. This work also aims to explore the role of language models in digital ethnobotany and modern herbology.

Personal Portfolio and Research Hub

Project Scope: This website is an evolving project—serving both as a digital portfolio and as a testing ground for design, data presentation, and backend interfacing. It is intended to showcase current technical projects, long-term research, and personal creative experiments across disciplines.

Technical Stack

    Frontend: HTML5, CSS3, embedded PDFs, video, and iframes

    Backend (planned): PHP for PostgreSQL interfacing

    Styling: Cosmic-themed CSS with glassmorphism and backdrop blur effects

    Hosting: Self-hosted on a Linux server using Apache2

Design Philosophy

The visual design prioritizes clarity and creative flow. Cosmic background imagery and glassmorphism effects give each section depth, while semi-transparent containers ensure text readability without sacrificing aesthetics. The site is built to evolve incrementally—each new project gets integrated live, as it matures.