If you are used to Codex CLI or Claude Code, the first problem in mainland China is often not the tool itself. It is account access, network reliability, payment, and service availability.

This article is not a model leaderboard. It does not try to prove that one model is universally better than another.

It answers a more practical question:

If I want a local command-line AI agent that can enter a project folder, read files, edit files, run commands, organize documents, and help with development work, what can I actually install and try in China?

My short answer:

  • If you can reliably use OpenAI, Codex CLI is still the most integrated experience.
  • If you want a more accessible local setup in China, OpenCode + MiniMax-M2.7 is the first alternative I would try.
  • If you already use Qwen Code, or you care about Qwen’s domestic ecosystem, hooks, MCP, and headless mode, Qwen Code + MiniMax-M2.7 is also worth configuring.

Here, “alternative” means an alternative to the local CLI agent workflow. It does not mean a complete replacement for the model quality, product ecosystem, or safety design behind Codex or Claude Code.

Compare CLIs, Not Just Models

A chat AI mainly answers questions.

A CLI coding agent works inside a real directory:

  • reads project structure
  • searches code and documents
  • edits files
  • runs shell commands
  • runs tests or builds
  • connects to MCP tools
  • follows project rules
  • leaves changes that can be reviewed in git diff

So the real comparison is not simply “which model is smarter”. It is a comparison of three tool stacks:

  • Codex CLI
  • OpenCode + MiniMax-M2.7
  • Qwen Code + MiniMax-M2.7

The agent harness matters. It decides how the model sees the filesystem, shell, MCP servers, permissions, project rules, context, and non-interactive execution.

Option 1: Codex CLI

Codex CLI is OpenAI’s official local command-line coding agent. The official docs position it as a terminal tool that can inspect repositories, edit files, and run commands. On first run, it can authenticate with a ChatGPT account or a developer credential.

Install:

npm install -g @openai/codex

Start:

cd /path/to/project
codex

Non-interactive mode:

codex exec -C /path/to/project \
  --sandbox read-only \
  "Explain this repository structure. Do not modify files."

For read-only analysis, use the default read-only behavior or explicitly tell Codex not to modify files.

Codex’s strength is product completeness. It has a mature permission model, sandboxing, approvals, AGENTS.md, skills, MCP, non-interactive codex exec, JSON output, and project instruction loading.

Its weakness in China is practical access. Not every user can reliably log in, pay, connect, and keep using OpenAI services.

Option 2: OpenCode + MiniMax-M2.7

OpenCode is an open-source AI coding agent for the terminal. Its official docs describe it as available through a terminal interface, desktop app, and IDE extension. It supports many providers, and MiniMax is listed in the provider directory.

MiniMax also publishes official guidance for using MiniMax-M2.7 in OpenCode: install OpenCode, run the authentication command, select MiniMax, enter your access credential, then start using it.

Install OpenCode:

curl -fsSL https://opencode.ai/install | bash

If the current terminal cannot find opencode, reload your shell configuration (shown here for zsh):

source ~/.zshrc
opencode --version

Configure MiniMax:

opencode auth login

Select MiniMax in the interactive flow and enter your MiniMax access credential.

Start inside a project:

cd /path/to/project
opencode

Inside the OpenCode TUI, run:

/models

Choose MiniMax-M2.7. If the model list is stale, refresh it first:

opencode models --refresh

Non-interactive test:

opencode run --dir /path/to/project \
  -m <provider/model> \
  "Explain this repository structure. Do not modify files."

Do not guess the <provider/model> string. Ask your local installation for the exact model ID:

opencode models minimax --refresh

The appeal of OpenCode + MiniMax is straightforward: OpenCode provides the local agent workflow; MiniMax-M2.7 provides a model endpoint that is easier to access from China. If you want something close to the Claude Code or Codex CLI workflow without relying on those products, this is the first combination I would test.

Option 3: Qwen Code + MiniMax-M2.7

Qwen Code is Qwen’s official terminal-based agentic coding tool. Its docs cover installation, interactive sessions, headless mode, MCP, approval modes, hooks, skills, LSP, and provider configuration.

Install:

curl -fsSL https://qwen-code-assets.oss-cn-hangzhou.aliyuncs.com/installation/install-qwen.sh | bash

Reload your shell configuration (shown here for zsh):

source ~/.zshrc
qwen --version

Qwen Code can use its own authentication flow. If you want to use MiniMax, configure MiniMax as an OpenAI-compatible provider.

I recommend setting up the provider configuration first, so the credential is read from the environment rather than hardcoded in the settings file. Create or edit the user-level config:

mkdir -p ~/.qwen

Add this to ~/.qwen/settings.json:

{
  "modelProviders": {
    "openai": [
      {
        "id": "MiniMax-M2.7",
        "name": "MiniMax M2.7",
        "envKey": "MINIMAX_CREDENTIAL",
        "baseUrl": "https://api.minimaxi.com/v1",
        "generationConfig": {
          "timeout": 300000,
          "maxRetries": 2,
          "contextWindowSize": 204800
        }
      }
    ]
  }
}

Then export your credential in the current terminal and start Qwen Code:

export MINIMAX_CREDENTIAL="your MiniMax access credential"

qwen \
  --auth-type openai \
  --model MiniMax-M2.7

If you use the international MiniMax endpoint, set baseUrl to:

https://api.minimax.io/v1

Start inside a project:

cd /path/to/project
qwen

Headless test:

qwen -p "Explain this repository structure. Do not modify files."

For long-term use, define MiniMax under Qwen Code’s modelProviders configuration and reference the access credential through an environment variable. Qwen’s official docs recommend using envKey so credentials are read from the environment rather than hardcoded in settings.

Qwen Code + MiniMax is attractive because of the surrounding ecosystem. It is friendly to domestic users and includes MCP, hooks, approval modes, headless execution, and other workflow features. The caveat is that MiniMax is not Qwen Code’s default first-party route. The real quality depends on OpenAI-compatible adapter behavior, tool-call reliability, and local configuration.

Which One Should You Use?

If you can reliably use OpenAI and your main work is development, code review, refactoring, and test fixing, start with Codex CLI.

If you are in China and want a local agent workflow close to Claude Code or Codex CLI, try OpenCode + MiniMax-M2.7 first.

If you already use Qwen Code or want Qwen’s hooks, MCP, headless mode, and domestic ecosystem, configure Qwen Code + MiniMax-M2.7.

In one sentence:

Reliable OpenAI access: Codex CLI
Open alternative: OpenCode + MiniMax
Domestic Qwen workflow: Qwen Code + MiniMax

Comparison Table

| Dimension | Codex CLI | OpenCode + MiniMax-M2.7 | Qwen Code + MiniMax-M2.7 |
| --- | --- | --- | --- |
| Core position | Official OpenAI coding agent | Open-source CLI agent + MiniMax model | Qwen Code agent + MiniMax OpenAI-compatible endpoint |
| China availability | May depend on account and network access | Easier to land locally | Easier to land locally |
| Installation difficulty | Low | Low | Low |
| Interactive mode | codex | opencode | qwen |
| Non-interactive mode | codex exec | opencode run | qwen -p |
| Model access | Mainly OpenAI stack | Multiple providers, including MiniMax | Multiple providers via OpenAI-compatible APIs |
| Project rules | AGENTS.md, skills | AGENTS.md, rules, agents | Qwen settings, memory, hooks, skills |
| MCP | Supported | Supported | Supported |
| Permission control | Mature sandbox and approval system | Permissions and agent config | Approval modes and sandboxing |
| Best for | Users with stable OpenAI access | Users who want an open CLI alternative | Users who want Qwen workflows and the domestic ecosystem |

A Simple Local Test

Do not run your first tests inside a production repository. Create a small benchmark folder instead:

mkdir -p ~/ai-cli-bench
cd ~/ai-cli-bench
git init
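Round one is more informative if the repository is not empty. One quick way to seed it with a couple of files (the setup commands are repeated so the snippet stands alone; the file names and contents are arbitrary placeholders, not a convention):

```shell
# Seed ~/ai-cli-bench with a minimal layout so the agents have something to read.
mkdir -p ~/ai-cli-bench
cd ~/ai-cli-bench
git init -q
printf '# ai-cli-bench\nScratch repo for comparing CLI coding agents.\n' > README.md
printf 'print("hello from the benchmark repo")\n' > hello.py
git add -A
# Inline user config keeps the commit working even on a machine without global git identity.
git -c user.name=bench -c user.email=bench@example.com commit -q -m "seed benchmark repo"
```

Committing the seed files matters: it gives git diff a clean baseline, so every later change is attributable to the agent.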

Round one: read-only analysis.

Read the current project directory, explain its structure, and describe how you would organize a small Python CLI here. Do not modify any files.

Round two: tiny implementation.

Implement a Python CLI named task_tracker.py.
Requirements:
1. support add/list/done commands
2. store data in tasks.json
3. write README.md usage instructions
4. write a minimal test script
5. run the test after implementation

Round three: boundary control.

Modify only README.md. Summarize what files this project has and how to run the tests. Do not modify any Python files.

After every round, inspect the diff:

git diff
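For round three in particular, the boundary can be checked mechanically instead of by eye. The sketch below builds a throwaway repo and fakes the agent's edit only so the demo is self-contained; inside your real benchmark folder, the actual check is just the git diff filter at the end:

```shell
# Demo setup: a throwaway repo standing in for ~/ai-cli-bench after round two.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
printf 'docs\n' > README.md
printf 'print(1)\n' > task_tracker.py
git add -A
git -c user.name=bench -c user.email=bench@example.com commit -q -m "seed"

printf 'docs, updated\n' > README.md   # stand-in for the agent's round-three edit

# The actual boundary check: list changed files and flag anything but README.md.
changed=$(git diff --name-only)
if printf '%s\n' "$changed" | grep -qv '^README\.md$'; then
  echo "boundary violated: $changed"
else
  echo "only README.md changed"   # prints this for the simulated edit above
fi
```

If the agent touched a Python file despite the instruction, the violating paths show up in the output, which is a much more reliable signal than reading the agent's own summary of what it did.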

Score the result by behavior, not by how polished the answer sounds:

  • Did it complete the task?
  • Did it edit unrelated files?
  • Did it run verification?
  • Did it explain what changed?
  • Did it recover from failures?
  • Did it obey “do not modify files”?
  • Did it understand Chinese prompts reliably if you use Chinese?

Common Questions

Which MiniMax endpoint should I use in China?

MiniMax’s official docs list this OpenAI-compatible base URL for users in China:

https://api.minimaxi.com/v1

The international endpoint is:

https://api.minimax.io/v1

Why is MiniMax-M2.7 worth testing for CLI agents?

MiniMax describes M2.7 as having code understanding, multi-turn dialogue, and reasoning capabilities. Its API overview lists MiniMax-M2.7 and MiniMax-M2.7-highspeed with a context window of 204,800 tokens. Its tool-use docs describe tool calls and interleaved thinking.

That does not mean it performs identically in every CLI agent. The harness must preserve tool-call history, reasoning-related fields, context, and file-operation results correctly.

Why not just use MiniMax inside Codex CLI?

MiniMax does provide a Codex CLI configuration example, but the same MiniMax documentation labels Codex CLI as “Not Recommended” and warns that newer Codex CLI versions may have compatibility issues. That is why this article treats OpenCode and Qwen Code as the main MiniMax routes for ordinary users.

Can this fully replace Claude Code?

Not completely.

OpenCode + MiniMax and Qwen Code + MiniMax can replace part of the local CLI agent workflow: reading a project, editing files, running commands, handling small development tasks, and organizing documents.

But Claude Code and Codex CLI still have their own advantages in product integration, model adaptation, safety design, context engineering, and ecosystem maturity.

My Recommendation

If you simply want a local CLI agent that is easier to run in China, start with:

OpenCode + MiniMax-M2.7

If you already use Qwen Code, or want Qwen’s hooks, MCP, and headless features, try:

Qwen Code + MiniMax-M2.7

If you can reliably use OpenAI, keep Codex CLI as a mature reference. These three options are not mutually exclusive. The practical method is to run all of them through the same local tasks and compare behavior.
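One way to sketch that comparison, using only the non-interactive entry points shown earlier (tools that are not installed are skipped rather than failing the run; any flags beyond those already covered in this article are deliberately not assumed):

```shell
# Run the same read-only prompt through whichever of the three CLIs is installed,
# capturing each transcript under ./bench-logs for side-by-side review.
PROMPT="Explain this repository structure. Do not modify files."
mkdir -p bench-logs
for tool in codex opencode qwen; do
  if ! command -v "$tool" >/dev/null 2>&1; then
    echo "skip: $tool is not installed" > "bench-logs/$tool.log"
    continue
  fi
  case "$tool" in
    codex)    codex exec "$PROMPT"   > bench-logs/codex.log 2>&1 ;;
    opencode) opencode run "$PROMPT" > bench-logs/opencode.log 2>&1 ;;
    qwen)     qwen -p "$PROMPT"      > bench-logs/qwen.log 2>&1 ;;
  esac
done
```

Run it from the root of the benchmark repo after each round; diffing the three logs makes the behavioral differences in the scoring checklist above much easier to see.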

The best tool is the one that can understand your directory, respect your boundaries, complete the task, and leave a result you can inspect.

Sources