Mccranky's Blog

Turn On, Boot Up, Jack In

The Daily Challenge mode boasts a significant player base, even compared to other popular modes, and offers several exclusive weapons to boot. However, this is bad news for us freeloaders who, through various means, gained access to the game without ever paying the publisher a single cent. Some may consider piracy bad practice, but to be honest, I’m broke as f*ck.

Since we can’t do the challenge runs like a normal person would, I figure it’s not much of a stretch to hack the game a bit further.

Overview

The approach we’re taking is to assign the blueprints to a chosen enemy. Naturally we’d want to assign them to an enemy we frequently run into and that has a decent drop rate. It so happens that the Zombie fits our description perfectly.

Let us have a look at the relevant part of its JSON description:

"name": "Zombie",
"score": 2,
"maxPerRoom": 0,
"canBeElite": true,
"glowInnerColor": 11197250,
"volteDelay": 1,
"flesh1": 5669211,
"flesh2": 12303527,
"pfCost": 0.5,
"blueprints": [
{
"item": "HorizontalTurret",
"rarity": "Rare",
"minDifficulty": 0
},
{
"item": "Bleeder",
"rarity": "Always",
"minDifficulty": 0
},
{
"rarity": "Rare",
"minDifficulty": 1,
"item": "PrisonerBobby"
}
]

Bleeder is the internal name for Blood Sword, which is typically the first blueprint obtained by Dead Cells players on their initial run. As such, it can be readily exchanged for an item of our preference.

Blood Sword

Targets:

  1. Swift Sword (internal name SpeedBlade) - first run
  2. Lacerating Aura (internal name DamageAura) - 5th run
  3. Meat Skewer (internal name DashSword) - 10th run

Visit the official wiki page for more information!

CellPacker

I use CellPacker to extract the data.cdb file so as to avoid having to read a raw hexdump. It is a lot more comfortable to inspect a formatted JSON file!

CellPacker

We can open CellPacker by double-clicking CellPacker.jar; however, I suggest running the following command to avoid headaches:

java -jar /path/to/CellPacker.jar

GitHub Repo: ReBuilders101/CellPacker

Click here to install CellPacker.jar.

res.pak

As the name suggests, res.pak is the game’s resource pack and contains everything the graphics and cutscenes need to load. It also contains the JSON data files that store the underlying logic behind the game’s interactions.

To hack it, we need a hex editor. I use ghex because it’s fairly simple and easy to use. We can install it on our MacBook via Homebrew, using the following command:

brew install ghex

Note:
Before we begin the actual editing process, make sure you have backup copies of your game saves and of the resource pack we’re editing. This is in case we make a mistake and corrupt the files.

ghex

Run ghex in Terminal and open res.pak. Locate Bleeder and replace it with the internal name of any of the three daily challenge blueprints.

Before

Please note that the resulting file must remain exactly the same size as the original, so significant changes will only complicate matters. For instance, Bleeder has 7 characters, while SpeedBlade and DamageAura have 10, and DashSword has 9. To keep the total length unchanged, I shortened “Zombie” by the difference. Here’s an example:

After
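Since the byte count must not change, it’s worth verifying the sizes before and after editing. Here’s a small sketch; the helper name and the backup filename are my own inventions, not part of any tool:

```shell
# Hypothetical helper: succeeds only when two files have identical byte counts.
same_size() {
    [ "$(wc -c < "$1")" -eq "$(wc -c < "$2")" ]
}

# Example usage, assuming you kept a backup named res.pak.bak:
# same_size res.pak.bak res.pak && echo "safe to ship" || echo "size changed!"
```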


References

Array of pointers to ints

Code:

#include <iostream>

int main()
{
    int* ap[15];
    int ai[15] = {};
    for (int i = 0; i < 15; ++i) {
        ai[i] = i;
        ap[i] = &ai[i];
        std::cout << *ap[i] << " " << ap[i] << std::endl;
    }
}

Console:

0 0x16d296d14
1 0x16d296d18
2 0x16d296d1c
3 0x16d296d20
4 0x16d296d24
5 0x16d296d28
6 0x16d296d2c
7 0x16d296d30
8 0x16d296d34
9 0x16d296d38
10 0x16d296d3c
11 0x16d296d40
12 0x16d296d44
13 0x16d296d48
14 0x16d296d4c

Pointer to an array of ints

Code:

#include <iostream>

int main()
{
    int ai[15];
    for (int i = 0; i < 15; i++) ai[i] = i;
    int (*ptr)[15] = &ai; // pointer to the whole array; the bound is part of the type
    for (int i = 0; i < 15; i++) {
        std::cout << (*ptr)[i] << " ";
    }
}

Console:

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14

Notes

  • The parentheses around *ptr are necessary because the [] operator has
    higher precedence than the * operator. Without the parentheses, the
    declaration would be parsed as an array of pointers to int, like int
    *ptr[15].

  • Taking the address of the array yields a pointer to the whole array (type
    int (*)[15]), not a pointer to its first element. If the bound is omitted
    from the pointer's type, the pointer carries no information about the
    array's size.

Pointer to function taking a string argument; returns a string

Code:

#include <iostream>
#include <string>

void replaceX(std::string *str, std::string x)
{
    size_t pos = str->find("${x}");
    if (pos != std::string::npos)
        str->replace(pos, 4, x);
}

std::string foo(std::string x)
{
    std::string str = "hello this is ${x}";
    replaceX(&str, x);
    return str;
}

int main()
{
    std::string (*fp)(std::string) = &foo;
    std::cout << (*fp)("foo") << std::endl;
    std::cout << (*fp)("bar") << std::endl;
}

Console:

hello this is foo
hello this is bar

In Bash, positional parameters are special variables that hold the arguments passed to a script or function. $0 holds the name of the script or function, and $1 through $9 hold the first through ninth arguments, respectively; arguments past the ninth must be referenced with braces, e.g. ${10}.

$@ and $* both expand to all the arguments passed to the script. When double-quoted, "$@" expands each positional parameter as a separate word, while "$*" joins all the positional parameters into a single string, separated by the first character of IFS (a space by default). Unquoted, the two behave identically.
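The quoted distinction is easiest to see by counting arguments; the demo function below is just for illustration:

```shell
# Print how many arguments the function received.
demo() { printf 'argc=%s\n' "$#"; }

set -- "a b" c   # two positional parameters, the first containing a space

demo "$@"   # argc=2 -- each parameter stays a separate word
demo "$*"   # argc=1 -- all parameters joined into one string
```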

Here’s an example:

#!/bin/bash

echo "Script Name: $0"
echo "First Arg: $1"
echo "Second Arg: $2"
echo "All args (arr): $@"
echo "All args (str): $*"

Now let’s run the script with three arguments:

chmod u+x ./script # Grant execution privilege for user

./script a b c

The output should be something like this:

Script Name: ./script
First Arg: a
Second Arg: b
All args (arr): a b c
All args (str): a b c
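One related gotcha worth a quick sketch: past the ninth argument, the braces in ${10} are mandatory, because $10 is parsed as $1 followed by a literal 0.

```shell
set -- a b c d e f g h i j k

echo "$1"     # a
echo "$10"    # a0 -- $1 followed by the character "0"
echo "${10}"  # j  -- the actual tenth argument
```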

Coding along with the book Wicked Cool Shell Scripts: 101 Scripts for Linux, OS X, and UNIX Systems.

This is the script from Page 11:

#!/bin/bash

in_path()
{
    cmd=$1
    ourpath=$2
    result=1
    oldIFS=$IFS
    IFS=":"

    for directory in $ourpath; do
        # If I double-quoted `$ourpath`, it would not word-split,
        # and the loop would only run once.
        if [ -x "$directory/$cmd" ]; then
            result=0
        fi
    done

    IFS=$oldIFS
    return $result
}

checkForCmdInPath()
{
    var=$1

    if [ ! "$var" = "" ]; then
        if [ "${var:0:1}" = "/" ]; then
            # Check whether `$var` is an absolute path.
            if [ ! -x "$var" ]; then
                return 1
            fi
        elif ! in_path "$var" "$PATH"; then
            return 2
        fi
    fi
}

Positional parameters

See here

A Quick Word

It is good practice to put double quotes around variables in Bash because it prevents word splitting and globbing.

  • Word splitting is the process of breaking up a string into separate words based on whitespace. If a variable contains spaces, without quotes, Bash will treat each space-separated word as a separate argument. This can cause unexpected behavior if the variable is used as an argument to a command.

  • Globbing is the process of expanding wildcard characters such as * and ? into a list of matching filenames. If a variable contains a glob character, without quotes, Bash will try to expand it into a list of matching filenames. This can also cause unexpected behavior if the variable is used as an argument to a command.

By putting double quotes around variables, Bash treats the variable as a single argument, preserving any whitespace or glob characters within the variable. This helps to ensure that the variable is interpreted correctly and the script behaves as expected.
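Word splitting is easy to see in action; the sample value here is arbitrary:

```shell
var="hello   world"

set -- $var      # unquoted: the value is split on whitespace
echo "$#"        # 2

set -- "$var"    # quoted: one argument, inner spaces preserved
echo "$#"        # 1
```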

Special variables

$IFS

The internal field separator (IFS) is a special variable in Bash that specifies the delimiter used to separate fields in a string. By default, the IFS is set to whitespace (i.e., space, tab, and newline characters).
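For instance, setting IFS to a colon lets a for loop walk the components of a PATH-like string; the sample value is made up:

```shell
path="/usr/bin:/bin:/usr/local/bin"

oldIFS=$IFS
IFS=":"
for dir in $path; do
    echo "$dir"   # prints each directory on its own line
done
IFS=$oldIFS       # always restore the old IFS afterwards
```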

$?

Holds the exit status of the most recently executed command, including a function’s return value.

echo $?

Use it immediately after calling a function that returns a value.
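For example (is_even is a throwaway function of my own, not from the book):

```shell
# Return 0 (success) for even numbers, 1 otherwise.
is_even() { [ $(( $1 % 2 )) -eq 0 ]; }

is_even 4
echo $?   # 0
is_even 7
echo $?   # 1
```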

Slice

[ "${var:0:1}" = "/" ];

The slice starts at index 0 and spans a single character.
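A couple more slices for good measure; the sample string is arbitrary:

```shell
var="/usr/local/bin"

echo "${var:0:1}"   # "/" -- one character starting at index 0
echo "${var:1:3}"   # "usr"
echo "${var:(-3)}"  # "bin" -- a parenthesized negative offset counts from the end (bash)
```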

Compare

String comparison

= tests if two strings are equal. For example, if [ "$string1" = "$string2" ] would be true if $string1 and $string2 have the same value.

Pattern matching

== tests if a string matches a pattern using globbing, which is a way to match filenames based on wildcard characters. For example, if [[ "$string" == a* ]] would be true if $string starts with the letter “a”.
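Side by side, the two comparisons look like this; the sample strings are arbitrary:

```shell
string="apple"

# Exact string comparison with `=` inside single brackets:
if [ "$string" = "apple" ]; then echo "exact match"; fi

# Glob-style pattern matching with `==` inside double brackets (bash):
if [[ "$string" == a* ]]; then echo "starts with a"; fi
if [[ "$string" == *pp* ]]; then echo "contains pp"; fi
```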

I ran into this hassle the other day when my npm command failed to work in the command line. It took me a while to sort things out and left me in a foul mood. To prevent something like that from happening again, I decided to write this post. Nothing fancy - just some commands that might prove helpful in the long run.

node

Display global installation directory

npm root -g

If you installed node using the official package installer, the global installation directory for npm packages would be:

/usr/local/lib/node_modules

Or you could’ve installed it using Homebrew. In that case, the global installation directory would be:

/opt/homebrew/lib/node_modules

View global installation

npm ls -g
/opt/homebrew/lib
├── @e-hentai/home@1.6.0-alpha.9
├── express-generator@4.16.1
├── nodemon@2.0.22
└── npm@9.6.5

Update global installation

npm update -g

View global configuration

npm config list

The output should be along the lines of this:

; "builtin" config from /opt/homebrew/lib/node_modules/npm/npmrc

prefix = "/opt/homebrew" 

; "user" config from /Users/Mccranky/.npmrc

fetch-retry-maxtimeout = 300000 

; node bin location = /opt/homebrew/Cellar/node/20.0.0/bin/node
; node version = v20.0.0
; npm local prefix = /Users/Mccranky
; npm version = 9.6.5
; cwd = /Users/Mccranky
; HOME = /Users/Mccranky
; Run `npm config ls -l` to show all defaults.

From this output, we can pick out a few key locations that might prove helpful.

“builtin” config is stored at /opt/homebrew/lib/node_modules/npm/npmrc

“user” config is stored at /Users/Mccranky/.npmrc

Add/remove configuration

npm config set <name> <value>

For example, if I want to specify a range of time before timeout occurs, I can use the following command:

npm config set fetch-retry-maxtimeout 300000 # 5 minutes

npm config set fetch-retry-mintimeout 60000 # 1 minute

The same mechanism applies when it comes to deleting configurations:

npm config delete fetch-retry-maxtimeout

npm config delete fetch-retry-mintimeout

express

I’ve only just started dabbling with express and my expertise is cut pretty thin, but the rule of thumb I tend to follow is:

  1. If you’re installing something that you want to use in your program using require('something'), then install it locally, at the root of your project.

  2. If you’re installing something that you want to use on the command line or something, install it globally, so that its binaries end up in your $PATH environment variable.

Based on this, you would want to install express-generator using the -g flag as you will use it as a command line tool, but you’d want to install express without this flag, as it’s a module you will require() in your application.

The key point of this demo is that an immediately invoked function in JavaScript creates a fresh variable scope, which distinguishes it from if, else, and while blocks. The code below demonstrates this.

var foo = 123

if (1) {
    var foo = 456
}

(_ => {
    var foo = 123
})()

console.log(foo)

Output:

456

Notice how the function does not reset the value of variable foo? Now that’s something to be mindful of!

Puppeteer is a Node.js library that provides a high-level API to control headless Chrome or Chromium browsers. It is widely used for web scraping, testing, and automation, and is an essential tool for many developers who work with web applications.

Note that I’ll be demonstrating on ArchLinux

Because Puppeteer relies on Node.js, the first thing we do is create a project directory and initiate npm.

mkdir puppeteer-project

cd puppeteer-project

npm init -y

npm i puppeteer --save

# Puppeteer requires some additional dependencies to be installed
sudo pacman -S libx11 libxcomposite libxdamage libxext libxi libxtst nss freetype2 harfbuzz

Now we write our script:

vim puppeteer.js

The script should be along the lines of this template:

const puppeteer = require('puppeteer');

(async () => {
    const browser = await puppeteer.launch({
        executablePath: '/usr/bin/chromium',
        // We can also drop this line and instead set an environment variable in Bash:
        // `$ export PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium`
        headless: true
    });
    const page = await browser.newPage();
    await page.goto('https://www.example.com');
    await page.screenshot({ path: 'example.png' });
    await browser.close();
})();

This code will launch Chromium in headless mode and navigate to https://www.example.com, take a screenshot of the page, and then close the browser.

More content about Puppeteer coming up!

Installation

To install Puppet Server, first add the upstream GPG key:

$ gpg --fetch-keys https://yum.puppetlabs.com/RPM-GPG-KEY-puppet-20250406
Then install the puppetserver AUR package.
git clone https://aur.archlinux.org/puppetserver.git

cd puppetserver/

makepkg -si

Have questions about the makepkg command? Visit here!

Afterwards, enable and start the puppetserver service.
sudo systemctl enable puppetserver.service

# Add to "/etc/systemd/system/multi-user.target.wants".

sudo systemctl start puppetserver.service

Delete a key

List the keys in your keyring using the command gpg --list-keys:

$ gpg --list-keys
/root/.gnupg/pubring.kbx
------------------------
pub   rsa4096 2019-04-08 [SC] [expires: 2025-04-06]
      D6811ED3ADEEB8441AF5AA8F4528B6CD9E61EF26
uid           [ unknown] Puppet, Inc. Release Key (Puppet, Inc. Release Key) <release@puppet.com>

For example, the key ID of the instance above is “D6811ED3ADEEB8441AF5AA8F4528B6CD9E61EF26”.

Delete the key using the command gpg --delete-keys [key ID].

Note that deleting a key from your keyring will prevent you from verifying any signatures made with that key in the future. If you need to use the key again later, you will need to fetch it again using the gpg --fetch-keys command.

Configuration

The Puppet Server’s configuration files are stored in /etc/puppetlabs/puppetserver/.

Solution

Our fix to the problem is to remove /mnt/share entirely and mount something else in its place.

This is the portion of the startup process which showed the error:

[FAILED] Failed to mount /mnt/share.
See 'systemctl status mnt-share.mount' for details

By the way, this post is for those who followed my last tutorial on using Samba service to mount a shared directory on ArchLinux VM.

Check mount unit status

We’ll start by following the instruction to check the status of the mount unit by running the suggested command:

systemctl status mnt-share.mount

This will give us more information about the specific error that occurred.

Warning: The unit file, source configuration file or drop-ins of mnt-share.moun>
x mnt-share.mount - /mnt/share
     Loaded: loaded (/etc/fstab; generated)
     Active: failed (Result: exit-code) since Thu 2023-04-27 02:45:46 UTC; 40mi>
      Where: /mnt/share
       What: //localhost/share
       Docs: man:fstab(5)
             man:systemd-fstab-generator(8)
        CPU: 28ms

Apr 27 02:45:46 alarm systemd[1]: Mounting /mnt/share...
Apr 27 02:45:46 alarm mount[379]: mount error(111): could not connect to ::1mou>
Apr 27 02:45:46 alarm systemd[1]: mnt-share.mount: Mount process exited, code=e>
Apr 27 02:45:46 alarm systemd[1]: mnt-share.mount: Failed with result 'exit-cod>
Apr 27 02:45:46 alarm systemd[1]: Failed to mount /mnt/share.

Side Content

Check file system properties

Since we’ve decided to remove the /mnt/share mount instead of trying to fix it, let’s take the opportunity to learn a little about mounting disks. For instance, if we want to mount an NTFS file system and ensure it is accessible and error-free, we can check it for errors with the following commands:

sudo pacman -S ntfs-3g

# Install the `ntfs-3g` package which provides the `ntfsfix` tool.

sudo ntfsfix /dev/sda1

# Replace `/dev/sda1` with the appropriate device and partition.

If the virtual hard disk has multiple partitions, we’ll need to determine which partition contains the NTFS file system we want to mount. Oftentimes we use the lsblk command to list the available storage devices and their partitions, and the blkid command to list the UUIDs of the partitions.

$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sr0     11:0    1 1024M  0 rom  
vda    253:0    0  9.8G  0 disk 
|-vda1 253:1    0  200M  0 part /boot
`-vda2 253:2    0  9.6G  0 part /

In my case, the virtual machine has a single virtual hard disk, which is identified as /dev/vda. This virtual hard disk has two partitions, /dev/vda1 and /dev/vda2, which are mounted at /boot and /, respectively.

/dev/vda1 is a 200MB partition formatted as a Linux file system such as ext4 or xfs, and contains the kernel and other boot files.

/dev/vda2 is a 9.6GB partition also formatted as a Linux file system, and contains the root file system for the Arch Linux operating system.

Mount NTFS file system

If we want to mount an additional NTFS file system on our virtual machine, we can create a new directory to use as the mount point, such as /mnt/ntfs, and then add an entry to the /etc/fstab file to mount the NTFS file system at that mount point.

For example, if the NTFS file system is located on /dev/vda3, we can add the following line to the /etc/fstab file:

/dev/vda3 /mnt/ntfs ntfs-3g defaults 0 0

Or if you’ve run blkid and looked up the UUID of the file system you wanted to mount, you can also swap the line above for the following one:

UUID=12345678-1234-1234-1234-1234567890ab /mnt/ntfs ntfs-3g defaults 0 0

# Pretend for a sec that the UUID of `/dev/vda3` is "12345678-1234-1234-1234-1234567890ab"

After adding the entry to the /etc/fstab file, we need to run sudo mount -a. This should mount the NTFS file system at the specified mount point, and the mount point should be automatically mounted at boot time.

Remove Samba entries

Edit Samba’s configuration file /etc/samba/smb.conf and remove everything in the [mnt] field.

Then, navigate to /etc/fstab and comment out anything associated with /mnt/share:

# share /mnt/share 9p trans=virtio,nofail 0 0
# //localhost/share /mnt/share cifs user=mccranky,password=Rogue12 0 0

Since we’ve already removed the Samba service on our machine, we have no need to check whether the mount options specified in the mount unit are correct. But if you haven’t removed it yet, feel free to edit the mount unit file located at /etc/systemd/system/mnt-share.mount and modify the mount options.

Remove the mount unit for /mnt/share:

sudo rm /etc/systemd/system/mnt-share.mount

Reload the systemd daemon to ensure that the changes take effect:

sudo systemctl daemon-reload

When we check the status of mnt-share.mount, we should get:

 $ sudo systemctl status mnt-share.mount
x mnt-share.mount
     Loaded: not-found (Reason: Unit mnt-share.mount not found.)
     Active: failed (Result: exit-code) since Thu 2023-04-27 02:45:46 UTC; 1h 5>
        CPU: 28ms

Apr 27 02:45:46 alarm systemd[1]: Mounting /mnt/share...
Apr 27 02:45:46 alarm mount[379]: mount error(111): could not connect to ::1mou>
Apr 27 02:45:46 alarm systemd[1]: mnt-share.mount: Mount process exited, code=e>
Apr 27 02:45:46 alarm systemd[1]: mnt-share.mount: Failed with result 'exit-cod>
Apr 27 02:45:46 alarm systemd[1]: Failed to mount /mnt/share.

Finally, we can run sudo rm -r /mnt/share to remove /mnt/share from our machine.

Perfect! Now when we reboot again we should receive no errors.

Install Dependencies

SPICE guest tools

See SPICE guest tools

SPICE WebDAV

SPICE WebDAV is required for QEMU directory sharing as an alternative to VirtFS.

Since we’re on ArchLinux, we should run the following command:

sudo pacman -S phodav

Now we’ll want to visit the shared directory.

Normally, it is exposed as a WebDAV mount on the guest’s localhost (typically on port 9843).

To access it, we need a browser.

Firefox

I don’t have one yet, so I’ll install Firefox via yay.

yay -S firefox

Before we move on, let’s configure the DISPLAY environment variable first. Again, I need to install one:

sudo pacman -S xorg xorg-xinit xfce4

# Once installation is complete, `reboot` and log back in

Before we proceed, make sure to check the following conditions:

X Window System is actually running: You can start the X Window System by running the command startx in a terminal window.

You have permission to access the X Window System: You may need to add your user account to the video group by running the command sudo usermod -aG video <username>.

X Window System is configured to allow remote connections: You may need to edit the /etc/X11/xinit/xserverrc file to include the -listen tcp option. For example, you can add the line exec /usr/bin/X -listen tcp to the file.


We may face some problems when we run startx, because the X server requires some sort of display device or screen to render graphical output, and since our ArchLinux VM boots to a text console, we’ll have to go through a few extra steps.

  • Run sudo pacman -S xf86-video-fbdev to install the xf86-video-fbdev package. This package provides a generic framebuffer driver that can be used in place of a specific display driver.

  • Edit the /etc/X11/xorg.conf.d/10-monitor.conf file to specify the fbdev driver. We can do this by adding the following lines to the file:

Section "Device"
Identifier "fbdev"
Driver "fbdev"
EndSection

That should do the trick when we run startx again.

If the X server still fails to start, you may need to investigate further by checking the /var/log/Xorg.0.log file for additional information about the error.


After double-checking the conditions listed above, we can now run the command export DISPLAY=:0 to set the display environment variable to the default value for the X Window System.

At last we launch Firefox and navigate to http://127.0.0.1:9843.

firefox http://127.0.0.1:9843

We can also try running Firefox with the --no-remote option. This option tells Firefox to open a new instance of the browser, rather than connecting to an existing instance.

firefox --no-remote http://127.0.0.1:9843

If the SPICE WebDAV service is running correctly, you should be able to see the page in the Firefox browser window.

Headless browser

Puppeteer

See here

i2pdbrowser

Clone this repo:

git clone https://github.com/PurpleI2P/i2pdbrowser.git

Navigate to the README.md file for more information.
