Nirajan Bohara

1. Role Name & Department

Role: QA Engineer / Full-Time Tester – Cashkr

Department: Technology / Quality Assurance


2. Reports To

Reports To: IT Manager – Shahid Shaikh

(Works closely with: Ayaz – Website, Suraj – Backend/API, Resham – Admin/DevOps, Bikesh – UI/UX)


3. Role Purpose (1–2 lines)

Test every change across the website, apps, admin panel, and backend flows to catch bugs before release, ensuring each build is stable, user-ready, and safe to push to production.


4. Key Result Areas (KRAs)

KRA 1: End-to-End Testing of Core Flows

Ensure all critical flows (lead → order → pickup, wallet/credits, vendor operations, CX workflows) are tested before go-live.

KRA 2: Defect Detection & Clear Reporting

Identify bugs early and log them with clear steps, evidence, and impact so developers can fix quickly.

KRA 3: Regression & Release Readiness

Run regression tests before deployments so new changes don’t break existing, live features.

KRA 4: Test Planning & Coverage

Create and maintain test cases/suites so coverage improves over time across web, app, admin, and APIs.

KRA 5: Collaboration for Quality

Work closely with devs and design to clarify expected behaviour and improve user experience via feedback.


5. KPIs (measurable)

  1. Bugs Found in QA vs Bugs Found in Production

    • Target a higher % of bugs caught in QA, with fewer P1 issues reported by real users/teams after release.

  2. Test Coverage for Critical Journeys

    • % of key flows that have written test cases and are executed before each major release.

  3. Regression Execution Rate

    • % of planned regression tests actually run before deployments (especially for major releases).

  4. Bug Report Quality

    • % of tickets that are reproducible, have clear steps, screenshots/video, and proper severity.

    • (Measured via feedback from Shahid and devs.)

  5. Retest Turnaround Time

    • Average time taken to re-test and close bugs once developers mark them as “fixed”.

  6. Escaped P1/P2 Issues per Month

    • Number of critical/high bugs that reach production after a “tested” release (aim to reduce trend).

  7. Test Case Maintenance & Updates

    • % of key modules that have up-to-date test cases when features change.
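KPIs 1 and 6 can be computed directly from a bug log. A minimal sketch, assuming a simple list of bug records; the field names (`phase`, `severity`) and the sample data are illustrative, not an existing Cashkr tool or schema:

```python
# Sketch: compute QA catch rate (KPI 1) and escaped P1/P2 count (KPI 6)
# from a bug log. Field names and sample data are illustrative assumptions.

def qa_catch_rate(bugs):
    """% of logged bugs found in QA rather than in production."""
    total = len(bugs)
    if total == 0:
        return 0.0
    caught = sum(1 for b in bugs if b["phase"] == "qa")
    return round(100 * caught / total, 1)

def escaped_critical(bugs):
    """P1/P2 bugs that reached production after a tested release."""
    return [b for b in bugs
            if b["phase"] == "production" and b["severity"] in ("P1", "P2")]

bugs = [
    {"id": 1, "phase": "qa",         "severity": "P2"},
    {"id": 2, "phase": "qa",         "severity": "P3"},
    {"id": 3, "phase": "production", "severity": "P1"},
    {"id": 4, "phase": "qa",         "severity": "P4"},
]

print(qa_catch_rate(bugs))          # 75.0
print(len(escaped_critical(bugs)))  # 1
```

Tracking these two numbers month over month gives the up/down comparison used in the monthly report.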


6. Core Processes Owned – SOPs

Nirajan is owner / co-owner of these QA SOPs:

  1. Test Case & Test Suite Management SOP

    • How test cases are written, updated, organised (by module: web, app, admin, vendor, API).

  2. Release Testing & Regression SOP

    • Standard checklist for what must be tested before any release (hotfix, minor, major).

  3. Bug Logging & Severity SOP

    • How to log bugs (tool/format), define severity (P1–P4), environment details, and link screenshots/videos.

  4. Cross-Platform Testing SOP

    • Ensuring coverage across devices: mobile web, desktop web, Android app, iOS app, vendor app, Admin Panel.

  5. Sanity Testing After Deployment SOP

    • Quick post-release checks in production (or production-like) to ensure key flows work.

  6. Co-ordination with Dev & Design SOP

    • How to clarify expected behaviour, get test builds, request logs, and confirm when something is a bug vs expected.
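To keep severity labels consistent across testers, the P1–P4 levels in the Bug Logging & Severity SOP could be applied with a simple triage rule. A hedged sketch: the criteria below are illustrative assumptions, not the official Cashkr definitions, which the SOP itself should remain the source of truth for:

```python
# Sketch: map a bug's observed impact to a P1-P4 severity label.
# The criteria are illustrative assumptions; the SOP defines the real rules.

def triage(blocks_core_flow, has_workaround, affects_all_users):
    """Return a P1-P4 severity label for a reported bug."""
    if blocks_core_flow and not has_workaround:
        return "P1"  # critical: a core flow is broken with no workaround
    if blocks_core_flow or affects_all_users:
        return "P2"  # high: major impact, but a workaround exists
    if has_workaround:
        return "P3"  # medium: noticeable issue, workaround available
    return "P4"      # low: cosmetic or minor issue

print(triage(blocks_core_flow=True,  has_workaround=False, affects_all_users=True))   # P1
print(triage(blocks_core_flow=False, has_workaround=True,  affects_all_users=False))  # P3
```

A rule like this also makes Bug Report Quality (KPI 4) easier to audit, since a ticket's severity can be checked against its described impact.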


7. Weekly / Monthly Reporting

Weekly Reporting

To:

  • IT Manager – Shahid

  • With visibility to: Ayaz, Suraj, Resham, Bikesh (and others when relevant)

Format: Short summary in Slack / Notion

Include:

  • Total bugs logged this week (by module + severity P1/P2/P3).

  • Areas tested (e.g., “Website evaluation flow”, “Admin Vendor assignment”, “App checkout”).

  • Critical/High issues still open and who they’re assigned to.

  • Any recurring patterns noticed (e.g., frequent issues in a specific module).

  • Plan for next week’s testing focus.


Monthly Reporting

To:

  • Shahid

  • Ibrahim (CEO)

  • Tech leads (and Ops/CX if useful)

Format: One short section/slide in the monthly review

Include:

  • Bug trends:

    • Total bugs by module & severity.

    • Comparison with previous month (up/down).

  • Escaped critical issues (P1/P2 found after release) and what’s being done to prevent them.

  • Coverage snapshot:

    • Which flows are well-covered vs weak areas (need more test cases).

  • Suggestions to improve quality:

    • Process improvements, common problem areas, need for automation later, etc.


© 2026 BigBold Technologies Pvt. Ltd.