The first goal for this type of process is to reduce the impact on the source system as much as possible. Do the majority of the 'work' on the destination - since you can control when and how that data is published and made available to the users.
If you can filter the data in the source to only those rows that have changed since the last time the process ran - then you can limit the amount of data being extracted from the source and easily perform an update/insert operation. That can be done either in SSIS using the SCD (Slowly Changing Dimension) task - or through a staging table and a merge or upsert (update/insert) process.
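As a rough sketch of that kind of incremental extract - assuming the source table carries a ModifiedDate column and you keep the last successful run time in a small control table (the etl.ExtractWatermark and stg.Orders names below are just placeholders):

```sql
-- Hypothetical control table holding the last successful extract time
DECLARE @LastRunDate datetime2 =
    (SELECT LastExtractDate FROM etl.ExtractWatermark WHERE TableName = 'Sales.Orders');

-- Pull only the rows changed since the last run into a staging table
INSERT INTO stg.Orders (OrderID, CustomerID, OrderDate, Amount, ModifiedDate)
SELECT OrderID, CustomerID, OrderDate, Amount, ModifiedDate
FROM   Sales.Orders
WHERE  ModifiedDate > @LastRunDate;

-- Advance the watermark to the latest change actually extracted
UPDATE etl.ExtractWatermark
SET    LastExtractDate = ISNULL((SELECT MAX(ModifiedDate) FROM stg.Orders), @LastRunDate)
WHERE  TableName = 'Sales.Orders';
```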
If you cannot determine the rows that have changed - then you must pull all rows from the table each time. You can then use the SCD task or a staging table and merge/upsert process. However - pulling all the data each week also allows for an easier process by truncating the destination table and performing a full load.
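If you go the truncate/full-load route, the destination side is about as simple as it gets (again, the table names are just examples):

```sql
-- Full refresh: clear the destination, then reload everything that was staged
TRUNCATE TABLE dbo.Orders;

INSERT INTO dbo.Orders (OrderID, CustomerID, OrderDate, Amount)
SELECT OrderID, CustomerID, OrderDate, Amount
FROM   stg.Orders;
```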
Note: 2 million rows is not really a lot of data - and shouldn't take a long time to extract and load. I have set up packages that extracted hundreds of millions of rows across multiple tables using a truncate/load process to 'refresh' the data every day - and that process took less than 30 minutes total.
If you want to track the changed data... that is, you want to be able to see what the data looked like last week before it was changed, you should look into temporal tables in the destination. With a system-versioned temporal table you can still perform the merge/upsert, and SQL Server automatically maintains a history table for you - where the history table contains each row's values as they were prior to being updated.
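A minimal sketch of a system-versioned (temporal) destination table - the columns and the Orders_History name are just examples:

```sql
CREATE TABLE dbo.Orders
(
    OrderID    int            NOT NULL PRIMARY KEY,
    CustomerID int            NOT NULL,
    Amount     decimal(18, 2) NOT NULL,
    -- Period columns required for system versioning
    ValidFrom  datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo    datetime2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.Orders_History));

-- See what a row looked like at a point in the past (e.g. last week)
SELECT *
FROM   dbo.Orders
FOR SYSTEM_TIME AS OF '2024-06-01'
WHERE  OrderID = 123;
```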
And finally - you need to determine which columns need to be checked for changes and what the business keys are. The SCD task walks you through that setup and builds the lookups for you - it looks up the data in the destination based on the business keys and redirects each row to either an update task or an insert task.
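If you build that logic yourself with a staging table instead of the SCD task, it boils down to a MERGE keyed on the business key that only updates when one of the tracked columns has actually changed (the column names below are hypothetical):

```sql
MERGE dbo.Orders AS tgt
USING stg.Orders AS src
    ON tgt.OrderID = src.OrderID                    -- business key
WHEN MATCHED AND (tgt.CustomerID <> src.CustomerID
               OR tgt.Amount     <> src.Amount)     -- columns checked for changes
    THEN UPDATE SET tgt.CustomerID = src.CustomerID,
                    tgt.Amount     = src.Amount
WHEN NOT MATCHED BY TARGET
    THEN INSERT (OrderID, CustomerID, Amount)
         VALUES (src.OrderID, src.CustomerID, src.Amount);
```

That gives you the same business-key lookup and update/insert split the SCD task generates - just under your own control, and it performs much better than the row-by-row updates the SCD task issues.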