CELLS - Automatic release of Work Orders into CELLS


Automatic Release of Work Orders into CELLS:

Work Orders can be manually released into CELLS using the CELLS Administrator module. However, some customers prefer to automate this process, enabling the automatic release of Work Orders and Sales Orders from their ERP, MRP, and PLM systems into CELLS. Given the wide variety of ERP, MRP, and PLM systems, there are several approaches to this automation, including using existing database drivers or contracting Unisoft to create the connection. Other methods are described below; for more details, contact Unisoft.



Using features of the database to automatically release Work Orders:

Some databases, such as Microsoft Access, can import data directly from Excel, XML, or text files. These features allow the database connected to CELLS to import work order release data from your ERP, MRP, and PLM systems. For example, an ERP system could write a released work order directly to the CELLS database tables, or it could write the released work order to a Microsoft Excel file that is then imported into the CELLS database tables.
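For illustration, here is a minimal sketch of the Excel route in Python over ODBC. The paths, table name (WorkOrders), and worksheet name (Sheet1) are placeholders, and it assumes the Access database engine behind the ODBC driver accepts external Excel references inside a query:

import pyodbc

# Placeholder paths -- substitute your CELLS database and your ERP's Excel export
database_path = r'C:\CELLS\cells.accdb'
excel_export_path = r'C:\ERP\released_work_orders.xlsx'

conn = pyodbc.connect(
    f'DRIVER={{Microsoft Access Driver (*.mdb, *.accdb)}};DBQ={database_path}')
cursor = conn.cursor()

# The Access database engine can read an Excel workbook referenced directly
# inside the query, appending its rows to an existing table in one statement.
# 'WorkOrders' and 'Sheet1' are hypothetical names -- use your own.
cursor.execute(
    'INSERT INTO WorkOrders '
    f'SELECT * FROM [Excel 12.0 Xlsx;HDR=YES;DATABASE={excel_export_path}].[Sheet1$]')
conn.commit()
conn.close()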


Using a script to automatically release Work Orders:

Below is one method for the automatic release of work orders into CELLS using a Python script. In this example, Work Order information is exported from ERP, MRP, and PLM systems into a CSV file. This file is then imported directly into a Microsoft Access database connected to CELLS to populate the Work Order tables and other related tables.

Note: The Unisoft CELLS Workflow MES interface uses ODBC (Open Database Connectivity). ODBC is a broadly accepted, vendor-neutral mechanism for accessing relational databases. Any ODBC-compliant application can access any Database Management System (DBMS) for which a driver is installed. Drivers are available for all major DBMSs and even for text or CSV files. Consequently, CELLS Workflow can communicate with Microsoft Access, SQL Server, Oracle, Sybase, and even Excel.
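As a hedged illustration of that flexibility, the pyodbc connection strings below show how the same script could point at different back ends. Driver names vary by installation, and the paths, server, and database names here are placeholders:

import pyodbc

# List the ODBC drivers installed on this machine -- confirm the names you
# intend to use actually appear here before building connection strings.
print(pyodbc.drivers())

# Illustrative connection strings (paths, server, and database names are
# placeholders -- substitute your own):
access_str = (r'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};'
              r'DBQ=C:\CELLS\cells.accdb')
sqlserver_str = ('DRIVER={ODBC Driver 17 for SQL Server};'
                 'SERVER=myserver;DATABASE=CELLS;Trusted_Connection=yes')
csv_str = (r'DRIVER={Microsoft Access Text Driver (*.txt, *.csv)};'
           r'DBQ=C:\exports')

# With the text driver, each file in the DBQ folder behaves like a table:
conn = pyodbc.connect(csv_str)
rows = conn.cursor().execute('SELECT * FROM [released_work_orders.csv]').fetchall()
conn.close()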

The following Python script utilizes the pandas library to handle the CSV file and pyodbc to interact with the Microsoft Access database. Ensure that you have these libraries installed before running the script. If not, you can install them using pip:

pip install pandas pyodbc
Here’s a script that reads a CSV file and imports its contents into an Access database:

import pandas as pd
import pyodbc 

# Specify the path to your CSV file
csv_file_path = 'path_to_your_csv_file.csv'

 
# Load the CSV file into a DataFrame
data = pd.read_csv(csv_file_path)

# Specify the connection parameters to your Access database
database_path = 'path_to_your_access_database.accdb'
conn_str = f'DRIVER={{Microsoft Access Driver (*.mdb, *.accdb)}};DBQ={database_path}'

# Connect to the Access database
conn = pyodbc.connect(conn_str)
cursor = conn.cursor()

# Specify the table name where you want to insert the data
table_name = 'your_table_name'

# Prepare the SQL query to insert data
# This assumes the table and columns already exist and match the CSV structure
placeholders = ', '.join(['?'] * len(data.columns))
# Bracket the column names so Access accepts names containing spaces or reserved words
columns = ', '.join(f'[{col}]' for col in data.columns)
sql = f'INSERT INTO {table_name} ({columns}) VALUES ({placeholders})'

# Insert data from DataFrame to Access table
for row in data.itertuples(index=False):
    cursor.execute(sql, row)

# Commit the changes and close the connection
conn.commit()
conn.close()

 

Things to Note:

CSV File Path: Replace 'path_to_your_csv_file.csv' with the actual path to your CSV file.

Database and Table: You need to adjust 'path_to_your_access_database.accdb' and 'your_table_name' to match your actual database file and table name.

Data Compatibility: Ensure the columns in the CSV file match those in the Access database table in both name and type.

Error Handling: You might want to add error handling to catch and log any issues that occur during the database operations; one way to do this is sketched below.
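For example, the insert loop could run inside a transaction so a failing row does not leave a half-released work order in the database. This is a minimal sketch, not part of CELLS itself; the function name and the all-or-nothing policy are our own choices:

import pyodbc

def insert_work_orders(conn, sql, rows):
    """Run the insert loop in one transaction: commit only if every row succeeds."""
    cursor = conn.cursor()
    try:
        for row in rows:
            cursor.execute(sql, row)
        conn.commit()    # every row inserted -- make the release permanent
    except pyodbc.Error as exc:
        conn.rollback()  # undo the partial insert
        print(f'Work order import failed: {exc}')
        raise

In the script above, the loop and commit would then be replaced by a call such as insert_work_orders(conn, sql, data.itertuples(index=False)).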

 

Adding a check for a newer CSV file:

To check whether the CSV file is newer than it was when last checked, you can use the file's modification timestamp: store the timestamp of the last check and compare it to the file's current modification time. The steps are:

- Check the file's last modification time.
- Compare it with the stored last-checked time.
- Proceed with processing only if the file is newer.

Here's how you might modify the previous script to include this functionality:

import pandas as pd
import pyodbc
import os

# Function to get the last modification time of a file
def get_file_mod_time(file_path):
    return os.path.getmtime(file_path) 

# Function to read the last checked time from a simple text file
def read_last_checked_time(file_path):
    try:
        with open(file_path, 'r') as f:
            return float(f.read().strip())
    except FileNotFoundError:
        return 0.0  # Returns 0 if the file does not exist, implying never checked 

# Function to update the last checked time
def update_last_checked_time(file_path, last_checked):
    with open(file_path, 'w') as f:
        f.write(str(last_checked))

# Specify paths
csv_file_path = 'path_to_your_csv_file.csv'
last_checked_file_path = 'last_checked.txt'
database_path = 'path_to_your_access_database.accdb' 

# Check if the CSV file is newer
last_checked_time = read_last_checked_time(last_checked_file_path)
current_mod_time = get_file_mod_time(csv_file_path) 

if current_mod_time > last_checked_time:
    # Load the CSV file into a DataFrame
    data = pd.read_csv(csv_file_path) 

    # Connection string and SQL setup
    conn_str = f'DRIVER={{Microsoft Access Driver (*.mdb, *.accdb)}};DBQ={database_path}'
    conn = pyodbc.connect(conn_str)
    cursor = conn.cursor()
    table_name = 'your_table_name'
    placeholders = ', '.join(['?'] * len(data.columns))
    columns = ', '.join(f'[{col}]' for col in data.columns)  # bracket names for Access
    sql = f'INSERT INTO {table_name} ({columns}) VALUES ({placeholders})'

    # Insert data from DataFrame to Access table
    for row in data.itertuples(index=False):
        cursor.execute(sql, row) 

    # Commit the changes and close the connection
    conn.commit()
    conn.close() 

    # Update last checked time
    update_last_checked_time(last_checked_file_path, current_mod_time)
    print("Database updated with new CSV data.")
else:
    print("CSV file has not changed since the last check.")   

 

Explanation: 

File Timestamps: The script uses os.path.getmtime() to get the last modified time of the CSV file. 

Storing Last Checked Time: The script reads and writes the last checked time to a simple text file (last_checked.txt). If the file doesn't exist, it assumes the file has never been checked. 

Conditional Processing: The database is only updated if the CSV file has been modified since it was last checked.
 

This approach ensures that your database is updated only when the CSV file has actually changed, which is efficient when the file does not change often. If you need more robust version control or more frequent updates, consider adding more detailed logging or using more structured storage for the timestamps, as sketched below.
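As one possible direction, this sketch keeps per-file timestamps in a small JSON file and writes a log entry for each check. The file names (import_state.json, work_order_import.log) are placeholders of our own choosing:

import json
import logging
import os

logging.basicConfig(filename='work_order_import.log', level=logging.INFO,
                    format='%(asctime)s %(levelname)s %(message)s')

STATE_FILE = 'import_state.json'  # hypothetical name for the timestamp store

def load_state():
    # Return the per-file last-checked timestamps, or an empty dict on first run
    try:
        with open(STATE_FILE) as f:
            return json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        return {}

def save_state(state):
    with open(STATE_FILE, 'w') as f:
        json.dump(state, f, indent=2)

def is_newer(csv_path, state):
    # True (and record the new time) if csv_path changed since the last check
    mod_time = os.path.getmtime(csv_path)
    if mod_time > state.get(csv_path, 0.0):
        state[csv_path] = mod_time
        logging.info('New data detected in %s', csv_path)
        return True
    logging.info('%s unchanged since last check', csv_path)
    return False

The main script would then call state = load_state(), guard the import with is_newer(csv_file_path, state), and finish with save_state(state).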

 


© 2024, Unisoft